UK Web Archive blog

Introduction

News and views from the British Library’s web archiving team and guests. Posts about the public UK Web Archive and, since April 2013, about web archiving as part of non-print legal deposit. Editor-in-chief: Jason Webber.

17 August 2016

Tender to Redevelop the UK Web Archive Website

The UK Web Archive (based at The British Library) is looking to appoint a superb User Experience (UX) company to help us improve our Web Archive service to the public and facilitate high quality academic research.

The project should be open source and bring an innovative and engaging interface to our web archive collections. The project should also integrate with our work on full-text search and trend analysis (see www.webarchive.org.uk/shine).

For a copy of the Invitation to Tender and how to respond please visit the BL eTendering Portal at the following link:

https://bl.bravosolution.co.uk; at the Home Page, under ‘Opportunities and notices’ please click on ‘View current opportunities and notices’.

If you wish to view or download the documents and are not registered on Bravo, please follow the instructions below (registration is free). If you are already registered, please go straight to step 2.

1. Register your company on the e-tendering portal (this is only required once).

Select the ‘Login or register to participate’ link above and click the ‘Click here to register’ link on the home page. Then:

  • Accept the terms and conditions and click Continue.
  • Enter your correct business and user details.
  • Note the username you chose and click Save when complete.
  • You will shortly receive an email with your unique password (please keep this secure).

2. Respond to the ITT

  • Log in to the portal with your username and password.
  • Click the ‘ITTs Open To All Suppliers’ link.
  • Click on the relevant tender.
  • Click the ‘Express Interest’ button in the Actions box on the left-hand side of the page. This will move the ITT into your ‘My ITTs’ page (a secure area reserved for your projects only).
  • You can now access any attachments by clicking ‘Settings and Buyer Attachments’ in the Actions box.

3. Responding to the ITT

  • You can now choose to Reply or Reject (please give a reason if rejecting).
  • You must use the Messages function to communicate with the Library and seek any clarification.
  • Note the deadline for completion, then follow the onscreen instructions to complete your response to the ITT.
  • You must then publish your reply using the Publish button in the Actions box on the left-hand side of the page.

Note: if you have any questions regarding the tender, please raise them through the portal.

By Jason Webber, Web Archiving Engagement Manager

27 June 2016

Capturing and Preserving the EU Referendum Debate (Brexit)

Following the announcement in May 2015 that there would be a referendum on the UK’s EU membership, the Legal Deposit UK Web Archive, led by curators at the Bodleian Libraries, started a collection of websites.

The team of curators includes contributors from the Bodleian Libraries, The British Library, the National Libraries of Scotland and Wales and also Queen’s University Belfast (for the Northern Ireland perspective) and the London School of Economics (for capturing and preserving individual documents, such as the pdf versions of campaigning leaflets). 

The collection scope is to capture the ‘Brexit’ debate and the debate around the EU Referendum as well as the wider context of UK/EU relations, including:

  • media coverage
  • websites of political parties and other political institutions and groups
  • campaigning and lobbying
  • trade unions, professional organisations and businesses
  • academic debate
  • culture and the arts
  • public opinion through blogs, comments and, where possible, social media

We primarily archive UK websites under the Non-Print Legal Deposit mandate, but also decided to include some sites outside the UK, if relevant – e.g. websites of UK expats in Europe, or political parties, interest groups and think tanks in the EU and in EU member states – on a permission basis.

The collection (at the time of writing) has 2590 target websites. Some of these are whole websites; others will be a single news story or blog post.

Access and availability
The majority of the collection will be available in the reading rooms of UK Legal Deposit libraries, including both British Library sites. As is usual for web archive collections, there is a delay between collection and availability of up to a year.

By Svenja Kunze, Project Archivist, Bodleian Libraries (Oxford University)

17 May 2016

Saving BBC Recipes Website

There's been much coverage today of plans to remove the recipe pages from the BBC website.


The UK Web Archive has been collecting selected pages from the BBC, mainly news, for over ten years and since 2013 we have attempted to capture the entirety of the BBC web estate. A small number of pages are available on the Open UK Web Archive website. Most of the BBC's online presence, however, is only available in the reading rooms of UK Legal Deposit libraries, including both of the British Library sites at St. Pancras and Boston Spa in Yorkshire.

We have today instigated a further crawl of the BBC website with the specific aim of ensuring that we save the recipes from the food pages. We can also report that the Internet Archive, Library of Alexandria and the National Library of Iceland have also captured these pages so their future is assured.

Polly Russell, British Library Curator and Food Historian says 

"Cookery books, like cookery websites, obviously serve a practical purpose but that is not all. For historians, sociologists and anthropologists they also tell us about people's culinary aspirations and anxieties, cultural tastes and trends, dietary preoccupations, social expectations and economic conditions. They are, therefore, a rich source for researchers. So while it's sad news to hear about plans to close the much trusted and well-loved BBC Food website, it's a relief that the British Library is going to be able to archive the website for posterity."


26 April 2016

Easter Rising 1916 Centenary in Print and Digital

Ireland has been gripped by commemorations of the Easter Rising over the last month. The Rising took place from 24 to 29 April 1916 in Dublin. A packed programme of events and activities has taken place across Ireland and in Irish communities further afield to mark this centenary.

In March 2016, addressing a colloquium at the Bodleian Library, Oxford, the Irish Ambassador to the United Kingdom, His Excellency Daniel Mulhall, emphasised the transnational and inclusive nature of the commemoration programme in his opening remarks. The 1916 Rising had a global impact, with ripples felt as far away as Asia and India. This is reflected in the range of events taking place in the United Kingdom, supported by the Irish Embassy.

In military terms the Rising was a failure and had consequences for the people of Dublin with 415 people killed, the majority of whom were civilians.

Print
Turning to the documentation of the Rising, there are a number of interesting items relating to it within the Library’s collections. The British Library does not hold an original broadside of the Proclamation of an Irish Republic; nevertheless, later examples of the document were acquired retrospectively.

The earliest example of a version of the proclamation in the British Library’s collections can be found at C.S.A.24/3.(1.). This is interesting from a bibliographical standpoint because it is the first entry under the new heading in the British Library Printed Catalogue to 1975:


Provisional Government of the Irish Republic 1916. Miscellaneous Public documents. 

That the Library classified this proclamation as a public document, and gave it the C.S.A. official publication pressmark prefix (which originates from the 1890s), is of particular interest. The third point of interest is that this version of the proclamation is the only item in the green bound guard-book which is embossed on the spine in gold.


IRELAND. PROCLAMATIONS, ETC.

Although the red (purchase) stamp appears on the reverse of the document, the way it has been mounted in the volume makes it unclear when the item was acquired; the stamp appears to read 15 May ‘59. The volume itself bears the British Museum binder’s stamp B.M.1961 on the inside of the rear board. These dates indicate that this item, like other ephemera relating to the 1916 Rebellion, was acquired retrospectively.


The second example of the proclamation is a more ornate affair. It is a single sheet dating from 1941, measuring approximately 325mm x 255mm. The text of the document is laid out in the same fashion as the original, but the typeface has been standardised, removing the anomalies of the original, and the list of signatories has been centred rather than justified to the right as in the original. What is most striking about this item are the portraits of the seven signatories surrounding the text, connected by the decorative border. At the bottom centre, surrounded in a circle, is the Irish Army sunburst emblem, designed by Eoin MacNeill; interestingly, it is reproduced without the inscription "Óglaigh na hÉireann" or Irish Volunteers.


The third document is a piece of contemporary ephemera which traces its lineage to the focal point of the rebellion. On the last page of the first issue of Irish War News, dated Tuesday 25 April 1916, is an article headed:

“Stop Press (Irish) ‘War News’ is published to-day because a momentous thing has happened. The Irish Republic has been declared in Dublin and a Provisional Government has been appointed to administer its affairs.”

The article goes on to name the signatories of the proclamation as the Provisional Government, while outlining the situation in Dublin from the rebel perspective.

Digital
The Rising, or more particularly the centenary of the events in Dublin a hundred years ago, is being explored and represented in new ways thanks to technology and the work of colleagues at Trinity College Dublin and the Bodleian Library Oxford. In the last year they have built and curated a collection of websites related to the commemoration.

These have been archived as part of the open UK Web Archive. To have the opportunity to build this collection of Irish and UK websites is an exciting prospect for the future of web-published content. This endeavour illustrates how the internet is not confined by national boundaries. The work on the Easter Rising collection exemplifies how archivists working together can build a contemporary collection which provides a range of perspectives from all corners of the .uk and .ie domains.

Archiving websites about anniversaries and centenaries such as Easter 1916 is of prime importance because such sites can be transient and are soon overwritten or taken down. Archiving them creates a research resource for the future which offers scholars and anyone interested the opportunity to explore and examine the response to this centenary on the published web.

The Easter Rising collection is currently a growing part of the UK Web Archive special collections where it can be freely consulted online.

By Jeremy Jenkins, Curator Emerging Media, The British Library
@_jerryjenkins


Further Reading

Bouch, Joseph J. “The Republican Proclamation of Easter Monday, 1916,” Bibliographical Society of Ireland, Publications vol.5. no.3 1936. General Reference Collection: Ac.9708/2 [A reissue].

The Easter Proclamation of the Irish Republic, MCMXVI
Dublin : Dolmen Press, 1960. General Reference Collection: Cup.510.ak.37

The Easter Proclamation of the Irish Republic 1916,
[S.l.] : Dolmen Press, 1976. Document Supply Shelfmark: D76/23312


15 February 2016

Introducing SHINE 2.0 - A Historical Search Engine


In 2015, as part of the Big UK Domain Data for the Arts and Humanities project, we released our first ‘historical search engine’ service. We’ve publicised it at IDCC15, the 2015 IIPC GA and at the first RESAW conference, and so far it has been very well received. Not only has it led to some excellent case studies that we can use to improve our services, but other web archives have shown interest in re-using the underlying open source code. In particular, some of our Canadian colleagues have successfully launched webarchives.ca, which lets users search ten years’ worth of archived websites from Canadian political parties and political interest groups (see here for more details).

Even bigger data!
But we remained frustrated for two reasons. Firstly, when we built that first service, we could not cope with the full scale of the 1996-2013 dataset, and we only managed to index the two billion resources up to 2010. Secondly, we had not yet learned how to cope with more than one or two users at a time, so we were loath to publicise the website too widely in case it crashed. So, over the last six months, and with the guidance of Toke Eskildsen and Thomas Egense at the State Library of Denmark, we’ve been working on resolving these scaling issues (their tech blog is definitely worth a look if you’re into this kind of thing).

Thanks to their input, I’m happy to announce that our historical search prototype now spans the whole period from 1996 to the 6th April 2013, and contains 3,520,628,647 distinct records.

[Graph: total number of indexed resources per year, 1996-2013]

Broken down by year, you can see there’s a lot of variation, depending on the timings of the global crawls from which this collection was drawn. This is why our trends visualisation plots query results as a percentage of all the resources crawled in each year rather than absolute figures. However, the overall variation and the fact that the 2013 chunk only covers the first three months should be kept in mind when interpreting the results.
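The per-year normalisation described above can be sketched in a few lines of Python. The yearly totals and hit counts here are made up purely for illustration; they are not real SHINE figures.

```python
# Yearly totals and per-query hits (made-up figures, not real SHINE data).
total_by_year = {2005: 180_000_000, 2006: 90_000_000, 2007: 310_000_000}
hits_by_year = {2005: 900_000, 2006: 450_000, 2007: 1_240_000}

def hits_as_percentage(hits, totals):
    """Express query hits as a percentage of all resources crawled that year,
    so variation in crawl size doesn't masquerade as a trend."""
    return {year: 100.0 * hits.get(year, 0) / totals[year] for year in totals}

print(hits_as_percentage(hits_by_year, total_by_year))
```

Plotting the resulting percentages rather than the raw counts is what keeps the uneven crawl sizes from dominating the shape of the graph.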

Time travel?
You might also notice there seem to be a few data points from as early as 1938, and even from 2072! This tiny proportion of results corresponds to malformed or erroneous records, although currently it’s not clear if the 1,714 results from 1995 are genuine or not. No one ever said Big Data would be Clean Data.

De-duplication of records
Furthermore, we’ve decided to change the way we handle web archiving records that have been ‘de-duplicated’. When the crawler visits a page and finds precisely the same item as before, instead of storing another copy, we can store a so-called “revisit record” and refer to the earlier copy rather than duplicating it. This crude form of data compression can save a lot of disk space for frequently crawled material, and its use has grown over time. For example, looking at the historical dataset, you can see that 30% of the 2013 results were duplicates.
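As a rough illustration of how revisit records work (a toy model, not the actual Heritrix/WARC implementation), a store can keep a digest of every payload it has seen and record only a lightweight reference when the same payload turns up again:

```python
import hashlib

class DedupStore:
    """Toy model of WARC-style de-duplication: when a payload's digest has
    been seen before, store a lightweight 'revisit' record pointing at the
    original, instead of a second full copy."""

    def __init__(self):
        self.records = []   # full 'response' records and 'revisit' stubs
        self.seen = {}      # payload digest -> index of the original record

    def add(self, url, payload):
        digest = hashlib.sha1(payload).hexdigest()
        if digest in self.seen:
            # Unchanged content: record a revisit, not another copy.
            self.records.append({"type": "revisit", "url": url,
                                 "refers_to": self.seen[digest]})
        else:
            self.seen[digest] = len(self.records)
            self.records.append({"type": "response", "url": url,
                                 "payload": payload, "digest": digest})

store = DedupStore()
store.add("http://example.org/", b"<html>hello</html>")
store.add("http://example.org/", b"<html>hello</html>")  # identical recrawl
print([r["type"] for r in store.records])
```

The indexing problem described below follows directly from this design: a revisit record carries no payload of its own, so an indexer that only looks at the record in hand has nothing to index.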

[Graph: proportion of ‘revisit’ (de-duplicated) records per year]

However, as these records don’t hold the actual item, our indexing process was not able to index these items properly. Over the next few weeks, we shall scan through these 65 million revisit records and ‘reduplicate’ them. This does mean that, for now, the results from 2013 might be a bit misleading in some cases. We also failed to index the last 11,031 of the 515,031 WARC files that make up this dataset (about 2% of the total, likely affecting the 2010-2013 results only), simply because we ran out of disk space. The index is using up 18.7TB of SSD storage, and if we can find more space, we’ll fill in the rest.

Do try it at home
In the meantime, please explore our historical archive and tell us what you find! It might be slow sometimes (maybe 10-20 seconds), so please be patient, but we’re pretty confident that it will be stable from now on.

[Example trend graphs: early social media, later social media, and ‘austerity’]

https://www.webarchive.org.uk/shine

By Andy Jackson, British Library Web Archiving Technical Lead

20 November 2015

The Provenance of Web Archives


Over the last few years, it’s been wonderful to see more and more researchers taking an interest in web archives. Perhaps we are even teetering into the mainstream when a publication like Forbes carries an article digging into the gory details of how we should document our crawls in How Much Of The Internet Does The Wayback Machine Really Archive?

Even before the data-mining BUDDAH project raised these issues, we’d spent a long time thinking about this, and we’ve tried our best to capture as much of our own crawl context as we can. We don’t just store the WARC request and response records (which themselves are much better at storing crawl context than the older ARC format), we also store:

  • The list of links that the crawler found when it analysed each resource (this is a standard Heritrix3 feature).
  • The full crawl log, which records DNS results and other situations that may not be reflected in the WARCs.
  • The crawler configuration, including seed lists, scope rules, exclusions etc.
  • The versions of the software we used (in WARC Info records and in the PREMIS/METS packaging).
  • Rendered versions of original seeds and home pages, as PNG and as HTML, and associated metadata.

In principle, we believe that the vast majority of questions about how and why a particular resource has been archived can be answered by studying this additional information. However, it’s not clear how this would really work in practice. Even assuming we have caught the most important crawl information, reconstructing the history behind any particular URL is going to be highly technical and challenging work because you can’t really understand the crawl without understanding the software (to some degree at least).

But there are definitely gaps that remain - in particular, we don’t document absences well. We don’t explicitly document precisely why certain URLs were rejected from the crawl, and if we make a mistake and miss a daily crawl, or mis-classify a site, it’s hard to tell the difference between accident and intent from the data. Similarly, we don’t document every aspect of our curatorial decisions, e.g. precisely why we choose to pursue permissions to crawl specific sites that are not in the UK domain. Capturing every mistake, decision or rationale simply isn’t possible, and realistically we’re only going to record information when the process of doing so can be largely or completely automated (as above, see also You get what you get and you don’t get upset).

And this is all just at the level of individual URLs. When performing corpus analysis, things get even more complex because crawl configurations vary within the crawls and change over time. Right now, it’s not at all clear how best to combine or summarize fine-grained provenance information in order to support data-mining and things like trend analysis. But, in the context of working on the BUDDAH project, we did start to explore how this might work.

For example, the Forbes article brings up the fact that crawl schedules vary, and so not every site has been crawled consistently, e.g. every day. Of course, we found exactly the same kind of thing when building the Shine search interface, and this is precisely why our trend graphs currently summarize the trends by year. In other words, if you average the crawled pages by year, you can wash out the short-lived variations. Of course, large crawls can last months, so really you want to be able to switch between different sampling parameters (quarterly, six-monthly, or annual, starting at any point in the year, etc.), so that you can check whether any perceptible trend may be a consequence of the sampling strategy (not that we got as far as implementing that, yet).
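Switching between sampling windows is straightforward once you have fine-grained counts. A minimal sketch, using made-up per-month figures rather than real crawl data:

```python
# Made-up per-month page counts for one query during 2009.
monthly = {(2009, m): 100 + 10 * m for m in range(1, 13)}

def rebucket(counts, months_per_bucket):
    """Aggregate (year, month) counts into coarser buckets, e.g. quarters,
    so short-lived variations in crawl scheduling wash out."""
    out = {}
    for (year, month), n in counts.items():
        bucket = (year, (month - 1) // months_per_bucket)
        out[bucket] = out.get(bucket, 0) + n
    return out

quarterly = rebucket(monthly, 3)   # four buckets for 2009
annual = rebucket(monthly, 12)     # one bucket for 2009
```

Comparing the same query at several bucket sizes is one way to check whether an apparent trend survives a change of sampling strategy.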

"Global Financial Crisis"

Similarly, notice that Shine shows you the percentage of matching resources by year, rather than the absolute number of matching documents. This is because the fraction of the crawled web that matches your query is generally more useful than the raw number of matching resources, where the crawl scheduling tends to obscure what’s going on. (Again, it would be even better to be able to switch between the two so you can better understand what any given trend means, although if you download the data for the graph you get the absolute figures as well as the relative ones.)

More useful still would be the ability to pick any other arbitrary query to be the normalization baseline, so you could plot matching words against total number of words per year, or matching links per total number of links, and so on. The crucial point is that if your trend is genuine, you can use sampling and normalization techniques to test that, and to find or rule out particular kinds of biases within the data set.
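Normalising against an arbitrary baseline query could look something like this (the `austerity` and `economy` counts below are invented for illustration):

```python
# Invented yearly hit counts for two queries.
austerity = {2008: 1_200, 2009: 5_400, 2010: 9_000}
economy = {2008: 60_000, 2009: 60_000, 2010: 90_000}

def normalise(query_hits, baseline_hits):
    """Per-year ratio of a query's hits to a chosen baseline query's hits.
    Years missing from the baseline are skipped."""
    return {year: query_hits[year] / baseline_hits[year]
            for year in query_hits if baseline_hits.get(year)}

print(normalise(austerity, economy))
```

If the ratio against a sensible baseline still rises, the trend is less likely to be an artefact of crawl size or scheduling.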

This is also why the trend interface offers to show you a random sample of the results underlying a trend. For example, it makes it much easier to quickly ascertain whether the apparent trend is due to a large number of false-positive hits coming from a small number of hosts, thus skewing the data.
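The kind of check this enables can be sketched as follows: draw a random sample of the result URLs and see whether a single host dominates. The URLs here are hypothetical stand-ins for real query results.

```python
import random
from collections import Counter
from urllib.parse import urlparse

# Hypothetical result set in which one noisy host contributes most hits.
results = (["http://spamhost.example/page%d" % i for i in range(95)]
           + ["http://bbc.example/story%d" % i for i in range(5)])

# Draw a random sample and count sampled hits per host.
sample = random.sample(results, 20)
by_host = Counter(urlparse(u).netloc for u in sample)
top_host, top_count = by_host.most_common(1)[0]

# Flag the trend as suspect if one host accounts for most of the sample.
if top_count / len(sample) > 0.5:
    print("possible skew: %s dominates the sampled hits" % top_host)
```

A skim of twenty sampled pages is usually enough to spot this kind of false-positive cluster without reading the full result set.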

I believe there will be practical ways of summarizing provenance information in order to describe the systematic biases within web archive collections, but it’s going to take a while to work out how to do this, particularly if we want this to be something we can compare across different web archives. My suspicion is that this will start from the top and work down - i.e. we will start by trying different sampling and normalization techniques, and discover what seems to work, then later on we’ll be able to work out how this arises from the fine details of the crawling and curation processes involved.

So, while I hope it is clear that I agree with the main thrust of the article, I must admit I am a little disappointed by its tone.

If the Archive simply opens its doors and releases tools to allow data mining of its web archive without conducting this kind of research into the collection’s biases, it is clear that the findings that result will be highly skewed and in many cases fail to accurately reflect the phenomena being studied.

Kalev Leetaru, How Much Of The Internet Does The Wayback Machine Really Archive?

The implication that we should not enable access to our collections until we have deduced its every bias is not at all constructive (and if it inhibits other organisations from making their data available, potentially quite damaging).

No corpus, digital or otherwise, is perfect. Every archival sliver can only come to be understood through use, and we must open up to and engage with researchers in order to discover what provenance we need and how our crawls and curation can be improved.

There are problems we need to document, certainly. Our BUDDAH project is using Internet Archive data, so none of the provenance I listed above was there to help us. And yes, when providing access to the data we do need to explain the crawl dynamics and parameters - you need to know that most of the Internet Archive crawls omit items over 10MB in size (see e.g. here), that they largely obey robots.txt (which is often why mainstream sites are missing), and that right now everyone’s harvesting processes are falling behind the development of the web.

But researchers can’t expect the archives to already know what they need to know, or to know exactly how these factors will influence their research questions. You should expect to have to learn why the dynamics of a web crawler mean that any data-mined ranking is highly unlikely to match up with popularity as defined by Alexa (which is based on web visitors rather than site-to-site links). You should expect to have to explore the data to test for biases, to confirm the known data issues and to help find the unknown ones.

“Know your data” applies to both of us. Meet us half way.

What we do lack, perhaps, is an adequate way of aggregating these experiences so that new researchers do not have to waste time re-discovering and re-learning these things. I don’t know exactly what this would look like, but the IIPC Web Archiving Conferences provide a strong starting point and a forum to take these issues forward.

By Andy Jackson, Web Archive Technical Lead, The British Library

30 October 2015

Who is best - Cats or Dogs?


Thursday 29 October was #NationalCatDay so the UK Web Archive have taken the opportunity to answer the BIG question that everyone is asking – are cats better than dogs? It is a rivalry as old as time itself and whilst it might be tricky to empirically say who is ‘best’ we can prove who is the most popular in the UK web space.

Using the SHINE interface we can look at trends across all of the .uk websites, based on the number of pages on which a certain term is used over the years 1996-2013.

We want to be sure to capture as many cat and dog references as possible, so the following terms are a good start: ‘cat OR kitten OR moggy OR kitty’ versus ‘dog OR puppy OR mutt’.

And the winner is [drumroll]…….

[Graph: ‘cat’ terms vs ‘dog’ terms as a percentage of the .uk domain, 1996-2013]

CATS!

That casual air of superiority that cats have appears to be fully justified.

Also, in 2005, in what we are now calling ‘Peak Cat’, pages with a mention of cats accounted for 4.5% of the ENTIRE .uk domain, as captured by the Internet Archive. Yes indeed, the humble moggy is popular with humans.

Try your own trend analysis: https://www.webarchive.org.uk/shine

By Jason Webber, Web Archiving Engagement and Liaison Manager

16 October 2015

Playing at Web Archiving


A few months ago, a colleague suggested that we should come up with ways of helping people learn about the main stages of web archiving, and to help them understand some of the more common technical terminology.

I got a bit carried away…

…because at the same time, I’d been hearing a lot about Twine and about the interactive fiction that people can build using it. So, I thought, why not use an interactive fiction engine to build a ‘web archiving simulator’ that takes you through the core web archiving life-cycle? A way to ‘learn by doing’ without having all the baggage involved in doing it for real?

Well, because it’ll suck up a tonne of time learning about Twine and twinery.org and the two different versions and fiddling about with the structure and with the prose…

Editing the Twine

After a few evenings I ran out of steam, and the experiment has been sitting in a browser tab since then, unfinished.

I enjoyed building it, but it’s really not going to get finished any time soon. I’m not even sure what ‘finished’ would look like any more. So I may as well publish it as it is. If you want to play the game of web archiving, click the link below…

Understanding Web Archiving

I’ve also made the source export available, which you should be able to upload at twinery.org if you want to extend it or just see how it works.

Let me know what you think!

Andy Jackson, British Library Web Archiving Technical Lead

x-post from http://anjackson.net/2015/08/19/web-archiving-twine/