THE BRITISH LIBRARY 

UK Web Archive blog

Information from the team at the UK Web Archive, the Library's premier resource of archived UK websites

Introduction

News and views from the British Library’s web archiving team and guests. Posts about the public UK Web Archive and, since April 2013, about web archiving as part of non-print legal deposit. Editor-in-chief: Jason Webber.

26 April 2016

Easter Rising 1916 Centenary in Print and Digital

Ireland has been gripped by commemorations of the Easter Rising over the last month. The Rising took place in Dublin from 24 to 29 April 1916. A packed programme of events and activities has taken place across Ireland, and in Irish communities further afield, to commemorate the centenary.

In March 2016, addressing a colloquium at the Bodleian Library, Oxford, the Irish Ambassador to the United Kingdom, His Excellency Daniel Mulhall, emphasised in his opening remarks the transnational and inclusive nature of the commemoration programme. The 1916 Rising had a global impact, with ripples felt as far away as India and elsewhere in Asia. This is reflected in the range of events taking place in the United Kingdom, supported by the Irish Embassy.

In military terms the Rising was a failure, and it had grave consequences for the people of Dublin: 415 people were killed, the majority of them civilians.

Print
Turning to the documentation of the Rising, the Library’s collections contain a number of interesting items. The British Library does not hold an original broadside of the Proclamation of an Irish Republic. Nevertheless, later examples of the document were acquired retrospectively.

The earliest example of a version of the proclamation in the British Library’s collections can be found at C.S.A.24/3.(1.). This is interesting from a bibliographical standpoint because it is the first entry under a new heading in the British Library Printed Catalogue to 1975:


Provisional Government of the Irish Republic 1916. Miscellaneous Public documents. 

It is of particular interest that the Library classified this proclamation as a public document and gave it the C.S.A. official-publication pressmark prefix, which originates from the 1890s. Also of note is that this version of the proclamation is the only item in the green-bound guard-book to be embossed in gold on the spine.

[Image: Poblacht na hÉireann proclamation, 1916]

IRELAND. PROCLAMATIONS, ETC.

Although the red (purchase) stamp appears on the reverse of the document, the way it has been mounted in the volume makes it unclear when the item was acquired. The stamp appears to read 15 May ‘59. The volume itself bears the British Museum binder’s stamp B.M.1961 on the inside of the rear board. These dates indicate that this item, like other ephemera relating to the 1916 Rebellion, was acquired retrospectively.

[Image: Poblacht na hÉireann proclamation, 1941]

The second example of the proclamation is a more ornate affair. It is a single sheet dating from 1941, measuring approximately 325mm x 255mm. The text is laid out in the same fashion as the original, but the typeface has been standardised, removing the anomalies of the original, and the list of signatories has been centred rather than justified to the right as in the original. What is most striking about this item are the portraits of the seven signatories surrounding the text, connected by the decorative border. At the bottom centre of the surround, in a circle, is the Irish Army sunburst emblem, designed by Eoin MacNeill; interestingly, it is reproduced without the inscription "Óglaigh na hÉireann" (Irish Volunteers).

[Images: Irish War News, front page and p. 4]

The third document is a piece of contemporary ephemera which traces its lineage to the focal point of the rebellion. On the last page of the first issue of Irish War News, dated Tuesday 25 April 1916, is an article headed:

“Stop Press (Irish) ‘War News’ is published to-day because a momentous thing has happened. The Irish Republic has been declared in Dublin and a Provisional Government has been appointed to administer its affairs.”

The article goes on to name the signatories of the proclamation as the Provisional Government, while outlining the situation in Dublin from the rebel perspective.

Digital
The Rising, or more particularly the centenary of the events in Dublin a hundred years ago, is being explored and represented in new ways thanks to technology and the work of colleagues at Trinity College Dublin and the Bodleian Library Oxford. In the last year they have built and curated a collection of websites related to the commemoration.

These have been archived as part of the open UK Web Archive. The opportunity to build this collection of Irish and UK websites is an exciting prospect for the future of web-published content. This endeavour illustrates how the internet is not confined by national boundaries, and the work on the Easter Rising collection exemplifies how archivists working together can build a contemporary collection which provides a range of perspectives from all corners of the .uk and .ie domains.

Archiving websites about anniversaries and centenaries such as Easter 1916 is of prime importance because such sites can be transient and are soon overwritten or taken down. Archiving them creates a research resource for the future which offers scholars and anyone interested the opportunity to explore and examine the response to this centenary on the published web.

The Easter Rising collection is currently a growing part of the UK Web Archive special collections where it can be freely consulted online.

By Jeremy Jenkins, Curator Emerging Media, The British Library
@_jerryjenkins

Further Reading

Bouch, Joseph J. “The Republican Proclamation of Easter Monday, 1916,” Bibliographical Society of Ireland, Publications vol.5. no.3 1936. General Reference Collection: Ac.9708/2 [A reissue].

The Easter Proclamation of the Irish Republic, MCMXVI
Dublin : Dolmen Press, 1960. General Reference Collection: Cup.510.ak.37

The Easter Proclamation of the Irish Republic 1916,
[S.l.] : Dolmen Press, 1976. Document Supply Shelfmark: D76/23312


15 February 2016

Introducing SHINE 2.0 - A Historical Search Engine


In 2015, as part of the Big UK Domain Data for the Arts and Humanities project, we released our first ‘historical search engine’ service. We’ve publicised it at IDCC15, the 2015 IIPC GA and at the first RESAW conference, and so far it has been very well received. Not only has it led to some excellent case studies that we can use to improve our services, but other web archives have shown interest in re-using the underlying open source code. In particular, some of our Canadian colleagues have successfully launched webarchives.ca, which lets users search ten years’ worth of archived websites from Canadian political parties and political interest groups.

Even bigger data!
But we remained frustrated for two reasons. Firstly, when we built that first service, we could not cope with the full scale of the 1996-2013 dataset, and we only managed to index the two billion resources up to 2010. Secondly, we had not yet learned how to cope with more than one or two users at a time, so we were loath to publicise the website too widely in case it crashed. So, over the last six months, and with the guidance of Toke Eskildsen and Thomas Egense at the State Library of Denmark, we’ve been working on resolving these scaling issues (their tech blog is definitely worth a look if you’re into this kind of thing).

Thanks to their input, I’m happy to announce that our historical search prototype now spans the whole period from 1996 to the 6th April 2013, and contains 3,520,628,647 distinct records.

[Image: total number of distinct records per year]

Broken down by year, you can see there’s a lot of variation, depending on the timings of the global crawls from which this collection was drawn. This is why our trends visualisation plots query results as a percentage of all the resources crawled in each year rather than absolute figures. However, the overall variation and the fact that the 2013 chunk only covers the first three months should be kept in mind when interpreting the results.
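That normalisation can be sketched in a few lines; the counts below are invented for illustration, not actual SHINE figures.

```python
# Illustrative sketch of the per-year normalisation used by the trends view:
# plot each query's hits as a percentage of everything crawled that year,
# so that variation in crawl size doesn't masquerade as a trend.
# All figures here are made up.
total_crawled = {2009: 400_000_000, 2010: 150_000_000, 2011: 600_000_000}
query_hits = {2009: 2_000_000, 2010: 900_000, 2011: 3_600_000}

def as_percentages(hits, totals):
    """Convert absolute hit counts into percentages of all resources per year."""
    return {year: 100.0 * hits.get(year, 0) / totals[year] for year in totals}

trend = as_percentages(query_hits, total_crawled)
# Absolute hits more than halve in 2010, but as a share of the much smaller
# 2010 crawl the query actually rises slightly (0.5% -> 0.6%).
```

The same query can look like a collapse in absolute terms and a steady signal in relative terms, which is exactly why the trend graphs plot percentages.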

Time travel?
You might also notice there seem to be a few data points from as early as 1938, and even from 2072! This tiny proportion of results corresponds to malformed or erroneous records, although currently it’s not clear if the 1,714 results from 1995 are genuine or not. No one ever said Big Data would be Clean Data.
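Screening out such out-of-range records is straightforward to sketch; the record list and field names below are hypothetical.

```python
# Minimal sketch of flagging records with implausible crawl dates.
# A record dated 1938 or 2072 is almost certainly a malformed timestamp,
# so anything outside the collection's nominal range is set aside for
# inspection rather than silently dropped. Sample data is invented.
records = [
    {"url": "http://example.co.uk/", "year": 1938},
    {"url": "http://example.co.uk/a", "year": 2005},
    {"url": "http://example.co.uk/b", "year": 2072},
    {"url": "http://example.co.uk/c", "year": 1995},  # ambiguous: maybe genuine
]

VALID_YEARS = range(1996, 2014)  # the dataset nominally spans 1996-2013

suspect = [r for r in records if r["year"] not in VALID_YEARS]
clean = [r for r in records if r["year"] in VALID_YEARS]
```

Keeping the suspect records around, rather than deleting them, leaves open the question of whether borderline cases like 1995 are genuine.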

De-duplication of records
Furthermore, we’ve decided to change the way we handle web archiving records that have been ‘de-duplicated’. When the crawler visits a page and finds precisely the same item as before, instead of storing another copy, we can store a so-called “revisit record” that refers to the earlier copy rather than duplicating it. This crude form of data compression can save a lot of disk space for frequently crawled material, and its use has grown over time. For example, looking at the historical dataset, you can see that 30% of the 2013 results were duplicates.
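The effect of revisit records on the yearly figures can be sketched as follows; the record types follow the WARC convention (‘response’ vs ‘revisit’), but the sample data is invented.

```python
# Sketch of how revisit records change the arithmetic: a 'revisit' record
# points back at an earlier identical capture instead of storing the
# payload again, so a year's results are a mix of full and deduplicated
# captures. The proportions here are invented to match the rough 30%
# observed in the 2013 chunk.
captures_2013 = (
    ["response"] * 7 +   # full captures with stored payloads
    ["revisit"] * 3      # deduplicated captures pointing at earlier copies
)

revisits = sum(1 for rec_type in captures_2013 if rec_type == "revisit")
duplicate_share = 100.0 * revisits / len(captures_2013)
```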

[Image: revisit records as a share of results per year]

However, as these records don’t hold the actual item, our indexing process was not able to index these items properly. Over the next few weeks, we shall scan through these 65 million revisit records and ‘reduplicate’ them. This does mean that, for now, the results from 2013 might be a bit misleading in some cases. We also failed to index the last 11,031 of the 515,031 WARC files that make up this dataset (about 2% of the total, likely affecting the 2010-2013 results only), simply because we ran out of disk space. The index is using up 18.7TB of SSD storage, and if we can find more space, we’ll fill in the rest.
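The coverage figure quoted above checks out with a quick back-of-the-envelope calculation:

```python
# Sanity check on the stated coverage: 11,031 of the 515,031 WARC files
# that make up the dataset were left unindexed for lack of disk space.
total_warcs = 515_031
unindexed = 11_031

missing_fraction = unindexed / total_warcs
indexed_fraction = 1 - missing_fraction
# missing_fraction comes out at roughly 0.021, i.e. about 2% of the files,
# matching the figure given in the post.
```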

Do try it at home
In the meantime, please explore our historical archive and tell us what you find! It might be slow sometimes (maybe 10-20 seconds), so please be patient, but we’re pretty confident that it will be stable from now on.

[Images: example SHINE trend graphs — early social media, later social media, austerity]

https://www.webarchive.org.uk/shine

By Andy Jackson, British Library Web Archiving Technical Lead

20 November 2015

The Provenance of Web Archives


Over the last few years, it’s been wonderful to see more and more researchers taking an interest in web archives. Perhaps we are even teetering into the mainstream when a publication like Forbes carries an article, “How Much Of The Internet Does The Wayback Machine Really Archive?”, digging into the gory details of how we should document our crawls.

Even before the data-mining BUDDAH project raised these issues, we’d spent a long time thinking about this, and we’ve tried our best to capture as much of our own crawl context as we can. We don’t just store the WARC request and response records (which themselves are much better at storing crawl context than the older ARC format), we also store:

  • The list of links that the crawler found when it analysed each resource (this is a standard Heritrix3 feature).
  • The full crawl log, which records DNS results and other situations that may not be reflected in the WARCs.
  • The crawler configuration, including seed lists, scope rules, exclusions etc.
  • The versions of the software we used (in WARC Info records and in the PREMIS/METS packaging).
  • Rendered versions of original seeds and home pages, as PNG and as HTML, and associated metadata.
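To illustrate how some of this context can be recovered, here is a rough sketch of parsing a Heritrix3 crawl-log line. The sample line is invented, and the field positions reflect the common Heritrix3 layout; real logs should be checked against the deployment’s configuration.

```python
# A rough sketch of pulling crawl context out of a Heritrix3 crawl log.
# Each line records (among other things) a timestamp, fetch status, size,
# URL, discovery path and referrer, which together explain how the crawler
# reached a given resource. The sample line is invented.
sample_line = (
    "2013-04-06T12:00:00.000Z 200 2248 http://example.co.uk/page "
    "L http://example.co.uk/ text/html #042 20130406115959 "
    "sha1:AAAABBBB - -"
)

def parse_crawl_log_line(line):
    """Split a crawl-log line into the fields useful for provenance questions."""
    fields = line.split()
    return {
        "timestamp": fields[0],
        "status": int(fields[1]),     # fetch status (HTTP code or Heritrix code)
        "size": int(fields[2]),       # bytes downloaded
        "url": fields[3],
        "discovery_path": fields[4],  # e.g. 'L' = found by following a link
        "referrer": fields[5],        # the page the crawler came from
    }

entry = parse_crawl_log_line(sample_line)
```

The discovery path and referrer fields are exactly the kind of information needed to reconstruct why a particular URL ended up in the archive.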

In principle, we believe that the vast majority of questions about how and why a particular resource has been archived can be answered by studying this additional information. However, it’s not clear how this would really work in practice. Even assuming we have caught the most important crawl information, reconstructing the history behind any particular URL is going to be highly technical and challenging work because you can’t really understand the crawl without understanding the software (to some degree at least).

But there are definitely gaps that remain - in particular, we don’t document absences well. We don’t explicitly document precisely why certain URLs were rejected from the crawl, and if we make a mistake and miss a daily crawl, or mis-classify a site, it’s hard to tell the difference between accident and intent from the data. Similarly, we don’t document every aspect of our curatorial decisions, e.g. precisely why we choose to pursue permissions to crawl specific sites that are not in the UK domain. Capturing every mistake, decision or rationale simply isn’t possible, and realistically we’re only going to record information when the process of doing so can be largely or completely automated (as above, see also You get what you get and you don’t get upset).

And this is all just at the level of individual URLs. When performing corpus analysis, things get even more complex because crawl configurations vary within the crawls and change over time. Right now, it’s not at all clear how best to combine or summarize fine-grained provenance information in order to support data-mining and things like trend analysis. But, in the context of working on the BUDDAH project, we did start to explore how this might work.

For example, the Forbes article brings up the fact that crawl schedules vary, and so not every site has been crawled consistently, e.g. every day. Of course, we found exactly the same kind of thing when building the Shine search interface, and this is precisely why our trend graphs currently summarize the trends by year. In other words, if you average the crawled pages by year, you can wash out the short-lived variations. Of course, large crawls can last months, so really you want to be able to switch between different sampling parameters (quarterly, six-monthly, or annual, starting at any point in the year, etc.), so that you can check whether any perceptible trend may be a consequence of the sampling strategy (not that we got as far as implementing that, yet).
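The sampling idea can be sketched with invented monthly hit counts summarised annually and quarterly: a short-lived burst survives quarterly sampling but is diluted in the annual figure, so switching granularities helps show whether a trend is an artefact of the sampling window.

```python
# Sketch of summarising the same monthly counts at two granularities.
# All data here is invented for illustration.
monthly_hits = {(2009, m): 100 for m in range(1, 13)}
monthly_hits[(2009, 11)] = 1300  # a short-lived burst in November

def summarise(hits, period):
    """Aggregate (year, month) counts by 'year' or by 'quarter'."""
    out = {}
    for (year, month), n in hits.items():
        key = year if period == "year" else (year, (month - 1) // 3 + 1)
        out[key] = out.get(key, 0) + n
    return out

annual = summarise(monthly_hits, "year")
quarterly = summarise(monthly_hits, "quarter")
# Annually, 2009 looks like one big number; quarterly, the burst is
# clearly confined to Q4 while Q1-Q3 stay flat.
```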

"Global Financial Crisis"

Similarly, notice that Shine shows you the percentage of matching resources by year, rather than the absolute number of matching documents. This is because the fraction of the crawled web that matches your query is generally more useful than the raw number of matching resources, since with absolute counts the crawl scheduling tends to obscure what’s going on (again, it would be even better to be able to switch between the two so you can better understand what any given trend means; if you download the data for the graph, you do get the absolute figures as well as the relative ones).

More useful still would be the ability to pick any other arbitrary query to be the normalization baseline, so you could plot matching words against total number of words per year, or matching links per total number of links, and so on. The crucial point is that if your trend is genuine, you can use sampling and normalization techniques to test that, and to find or rule out particular kinds of biases within the data set.
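A minimal sketch of that kind of baseline normalisation, with invented counts and a hypothetical baseline query:

```python
# Sketch of normalising one query against an arbitrary baseline query
# rather than against everything crawled: if part of an apparent spike
# just tracks the baseline topic's own growth, the ratio flattens it out.
# All counts are invented.
query_hits = {2007: 50, 2008: 400, 2009: 380}        # e.g. 'credit crunch'
baseline_hits = {2007: 1_000, 2008: 4_000, 2009: 3_800}  # e.g. 'bank'

def normalise(query, baseline):
    """Express query hits as a ratio of the chosen baseline per year."""
    return {year: query[year] / baseline[year] for year in query}

ratios = normalise(query_hits, baseline_hits)
# The raw counts grow eightfold in 2008, but the ratio only doubles
# (0.05 -> 0.10), because the baseline grew too.
```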

This is also why the trend interface offers to show you a random sample of the results underlying a trend. For example, it makes it much easier to quickly ascertain whether the apparent trend is due to a large number of false-positive hits coming from a small number of hosts, thus skewing the data.
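One such check can be sketched as measuring host concentration within a sample of hits; the hosts and counts below are invented.

```python
# Sketch of a host-concentration check on a sample of matching URLs:
# if most hits come from one host, the apparent trend may really be
# a single prolific site rather than a broad shift. Data is invented.
from urllib.parse import urlparse
from collections import Counter

sample_hits = (
    ["http://spammy.example.co.uk/p%d" % i for i in range(80)]
    + ["http://a.example.co.uk/", "http://b.example.co.uk/"] * 10
)

host_counts = Counter(urlparse(u).hostname for u in sample_hits)
top_host, top_count = host_counts.most_common(1)[0]
top_share = top_count / len(sample_hits)
# Here 80% of the sample comes from one host - a strong hint that the
# 'trend' is really one site's output.
```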

I believe there will be practical ways of summarizing provenance information in order to describe the systematic biases within web archive collections, but it’s going to take a while to work out how to do this, particularly if we want this to be something we can compare across different web archives. My suspicion is that this will start from the top and work down - i.e. we will start by trying different sampling and normalization techniques, and discover what seems to work, then later on we’ll be able to work out how this arises from the fine details of the crawling and curation processes involved.

So, while I hope it is clear that I agree with the main thrust of the article, I must admit I am a little disappointed by its tone.

If the Archive simply opens its doors and releases tools to allow data mining of its web archive without conducting this kind of research into the collection’s biases, it is clear that the findings that result will be highly skewed and in many cases fail to accurately reflect the phenomena being studied.

Kalev Leetaru, How Much Of The Internet Does The Wayback Machine Really Archive?

The implication that we should not enable access to our collections until we have deduced its every bias is not at all constructive (and if it inhibits other organisations from making their data available, potentially quite damaging).

No corpus, digital or otherwise, is perfect. Every archival sliver can only come to be understood through use, and we must open up to and engage with researchers in order to discover what provenance we need and how our crawls and curation can be improved.

There are problems we need to document, certainly. Our BUDDAH project is using Internet Archive data, so none of the provenance I listed above was there to help us. And yes, when providing access to the data we do need to explain the crawl dynamics and parameters - you need to know that most of the Internet Archive crawls omit items over 10MB in size (see e.g. here), that they largely obey robots.txt (which is often why mainstream sites are missing), and that right now everyone’s harvesting processes are falling behind the development of the web.

But researchers can’t expect the archives to already know what they need to know, or to know exactly how these factors will influence their research questions. You should expect to have to learn why the dynamics of a web crawler mean that any data-mined ranking is highly unlikely to match up with popularity as defined by Alexa (which is based on web visitors rather than site-to-site links). You should expect to have to explore the data to test for biases, to confirm the known data issues and to help find the unknown ones.

“Know your data” applies to both of us. Meet us half way.

What we do lack, perhaps, is an adequate way of aggregating these experiences, so that new researchers do not have to waste time re-discovering and re-learning these things. I don’t know exactly what this would look like, but the IIPC Web Archiving Conferences provide a strong starting point and a forum to take these issues forward.

By Andy Jackson, Web Archive Technical Lead, The British Library