Digital scholarship blog

Enabling innovative research with British Library digital collections


17 March 2015

BL Labs Competition and Awards Roadshow 2015

Mahendra Mahey, Manager of BL Labs
Closing date for Competition: Thursday 30th of April, 2015
Closing date for Award: Monday 14th of September 2015

The 2015 BL Labs Competition has been launched for the third time, and once again we want researchers to submit their ideas for projects that highlight the Library’s digital collections; winners will work in residence with the Labs team to make their ideas real. Please help us spread the word!

Previous finalists of the BL Labs Competition have helped us learn more about what is possible with the Library’s digital collections.

We saw an amazing range of creative and innovative ideas among the entries in 2013 and 2014, and we look forward to seeing even more in 2015! Winners will be chosen by Friday 29th of May 2015.

In addition, we are launching the new 2015 BL Labs Awards for outstanding work that has already been completed using British Library digital content. We are looking for examples in the categories of Research, Creativity, and Entrepreneurship. Shortlisted candidates will be informed by Monday 12th October 2015.

Competition winners will showcase their work and Award winners will be announced at the third Labs Symposium on Monday 2 November, 2015 in the British Library Conference Centre.

We are organising a number of roadshows around the country to promote the competition. For more information and to register, please see:

Contact us at [email protected] or visit http://labs.bl.uk/Events

 

29 January 2015

PicaGuess: a prototype crowdsourcing app from the British Library Big Data Experiment

The British Library Big Data Experiment is an ongoing collaboration between British Library Digital Research and UCL Department of Computer Science (UCLCS), facilitated by UCL Centre for Digital Humanities (UCLDH), that engages computer science students with humanities research issues as part of their core assessed work.

All taught undergraduate and postgraduate programmes in UCLCS require students to undertake an industry exchange in which they work in teams for an industry client. Though UCLCS has experience of developing student projects in partnership with digital humanists, industry partners have tended to come from the financial or manufacturing sectors. The British Library Big Data Experiment is an umbrella for a series of activities in which the British Library is the client for assessed UCLCS project work, allowing for a rolling, responsive programme of experimental design, development, and testing of infrastructure and systems. Those wanting to find out more about the British Library Big Data Experiment should look out for our forthcoming poster at DH2015 (which will - of course - be posted online).


The latest project to come out of this collaboration is PicaGuess, a web and Android application developed by Jonathan Lloyd, Meral Sahin, and Divya Surendran, all of whom are studying for an MSc at UCLCS. PicaGuess is an image-guessing game that examines your play to help the British Library learn about our digital collections. It uses a Draw Something-like mechanic to enable structured linking between the one million public domain book illustrations the British Library released onto Flickr in December 2013. Initially, the only information we had about these images was their size and the book and page on which they appeared. Over time the community has added tens of thousands of semantic tags to the collection, effort which has proven enormously valuable in deepening our understanding of it (a dataset of all tags from December 2013 to December 2014 is available on Figshare for unrestricted use and reuse). However, this sort of free-text tagging can introduce problems and idiosyncrasies. PicaGuess aims to complement the Flickr tag data by using a set of category words to drive community tagging. These category words are both descriptive and more abstract (so 'dark', 'tender', and 'labour' as well as the usual 'map', 'building', 'person') and underpin two distinct user interactions with the collection. These are:

  • creating sets of four images representative of a category word. The user chooses from three category words, then browses through the image set to find suitable images for that word.
  • guessing the category word that another user has assigned to a four-image set. The user is presented with the four images and a selection of letters (with some extra letters thrown in to make it trickier) from which the category word can be spelt. If stuck, users can ask for a hint or give up altogether.

At the back end, confidence scores for the relationship between a word and an image are updated based on the time it takes a user to solve a game, whether or not they need a hint to do so, and whether they fail to do so. Together, these two simple games provide a platform for the distributed determination of links between illustrations, where the parameters can be controlled by the collection 'owner' and rich data accompany those links.
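To make the scoring concrete, here is a minimal sketch of how such an update rule might work. The function, weights, and thresholds below are illustrative assumptions, not the project's actual back-end code:

```python
def update_confidence(score, solve_time_secs, used_hint, gave_up,
                      max_time=60.0, step=0.1):
    """Nudge a word-image confidence score (0..1) after one game.

    Hypothetical weighting: fast, unaided solves raise the score most;
    giving up lowers it. Illustrative only, not the PicaGuess back end.
    """
    if gave_up:
        return max(0.0, score - step)
    # Quicker solves earn a larger share of the full step.
    speed_bonus = max(0.0, 1.0 - solve_time_secs / max_time)
    gain = step * speed_bonus * (0.5 if used_hint else 1.0)
    return min(1.0, score + gain)

# A player solves 'tender' for an image set in 12 seconds, without a hint.
score = update_confidence(0.4, solve_time_secs=12, used_hint=False, gave_up=False)
print(round(score, 3))  # 0.48
```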


You can find PicaGuess at picaguess.herokuapp.com, from where you can download the associated Android application. All code from the project is available for inspection and reuse on GitHub and periodically we hope to release the confidence scores for use and reuse.

In line with the spirit of the British Library Big Data Experiment, PicaGuess is not an official British Library service but very much a prototype that we hope can be improved and refined over time. And so we encourage you to share, to comment (here or via an email to [email protected]), and to build on the hard work, creativity, and achievements of Jonathan, Meral, and Divya.

Next up for the British Library Big Data Experiment is a machine learning project. More here when there is something to share.

James Baker, Curator, Digital Research

@j_w_baker

08 January 2015

Help #bldigital to help you do better digital research

The Jisc Research Data Spring is a project that aims to find tools, software, and service solutions that will improve how researchers work, in particular how they use and manage data.

The British Library Digital Research team are confident that infrastructures that deliver flexible and scalable access to large digital collections as data can enable better research. Last year we spoke about this at Digital Humanities 2014 (Farquhar, Adam; Baker, James (2014): Interoperable Infrastructures for Digital Research: A proposed pathway for enabling transformation. figshare. dx.doi.org/10.6084/m9.figshare.1092550) and we continue to work with student teams at UCL Computer Science to experiment with platforms for access to and interrogation of British Library digital collections.

Building on these activities, we are involved in two initial project proposals for the Jisc Research Data Spring:

'Dissecting digital humanities data with biomedical tools' is a collaboration with the School of Social and Community Medicine, University of Bristol. It seeks to adapt DataSHIELD - originally developed to co-analyse numerical patient data from different sources without disclosing identity or sensitive information - into a proof-of-concept for supporting a range of text analyses across datasets that present divergent challenges to access and interpretation. In short, whether the barrier to computational text analysis is ethical-legal, related to IP or licensing, or simply the physical size of the data, this project will be a step towards helping you work across those data and derive meaningful, high-level, comparative results from your analysis. For more on DataSHIELD see 'DataSHIELD: taking the analysis to the data, not the data to the analysis' (2014).
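DataSHIELD itself is built on R, but the underlying 'take the analysis to the data' pattern is easy to sketch. In this illustrative Python example (our own, not DataSHIELD code), each site computes only aggregate term counts locally, and only those non-disclosive summaries are pooled:

```python
from collections import Counter

def local_term_counts(documents, vocabulary):
    """Runs inside each data provider: only aggregate counts leave the
    site, never the underlying (restricted) texts."""
    counts = Counter()
    for doc in documents:
        for token in doc.lower().split():
            if token in vocabulary:
                counts[token] += 1
    return dict(counts)

def combine(site_summaries):
    """Runs at the analysis hub: merges the non-disclosive summaries."""
    total = Counter()
    for summary in site_summaries:
        total.update(summary)
    return total

vocab = {"liberty", "labour", "empire"}
site_a = local_term_counts(["labour and liberty in the empire"], vocab)
site_b = local_term_counts(["liberty above all liberty"], vocab)
print(combine([site_a, site_b]))  # Counter({'liberty': 3, ...})
```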

'Enabling complex analysis of large scale digital collections' is a collaboration with UCL Centre for Digital Humanities and UCL Research Computing that seeks to use the resources of the latter, in combination with our data, to investigate the needs and requirements of a service that would allow researchers to undertake complex searches of digital content. By enabling both the research community and the public to propose problems and take an active role in understanding how those problems are translated into complex queries that UCL Research Computing could perform on the data, the project aims to generate a better understanding of the demands involved in processing large-scale cultural data and to inform us about user requirements for reusing, analysing, and facilitating searches of digital content.

For these discrete but complementary projects to become reality, we need your help: only by commenting on and voting for the projects on IdeaScale (head to the pages for 'Dissecting digital humanities data with biomedical tools' and 'Enabling complex analysis of large scale digital collections' respectively) can they advance to the next stage and perhaps - eventually - secure substantial funding.

James Baker

Curator, Digital Research

@j_w_baker

22 October 2014

Victorian Meme Machine - Extracting and Converting Jokes

Posted on behalf of Bob Nicholson.

The Victorian Meme Machine is a collaboration between the British Library Labs and Dr Bob Nicholson (Edge Hill University). The project will create an extensive database of Victorian jokes and then experiment with ways to recirculate them over social media. For an introduction to the project, take a look at this blog post or this video presentation.


In my previous blog post I wrote about the challenge of finding jokes in nineteenth century books and newspapers. There’s still a lot of work to be done before we have a truly comprehensive strategy for identifying gags in digital archives, but our initial searches scooped up a lot of low-hanging fruit. Using a range of keywords and manual browsing methods we quickly managed to identify the locations of more than 100,000 gags. In truth, this was always going to be the easy bit. The real challenge lies in automatically extracting these jokes from their home-archives, importing them into our own database, and then converting them into a format that we can broadcast over social media.

Extracting joke columns from the 19th Century British Library Newspaper Archive – the primary source of our material – presents a range of technical and legal obstacles. On the plus side, the underlying structure of the archive is well-suited to our purposes. Newspaper pages have already been broken up into individual articles and columns, and the XML for each of these articles includes an ‘Article Title’ field. As a result, it should theoretically be possible to isolate every article with the title “Jokes of the Day” and then extract them from the rest of the database. When I pitched this project to the BL Labs, I naïvely thought that we’d be able to perform these extractions in a matter of minutes – unfortunately, it’s not that easy.
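If we did have the files, the first pass could be as simple as the sketch below. The element and field names here are invented for illustration; the archive's real schema differs and, as explained next, is not directly accessible in any case:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

WANTED_TITLES = {"jokes of the day", "jokes of the week"}

def joke_articles(xml_dir):
    """Yield the text of every article whose title field matches a known
    joke column. 'article', 'title' and 'text' are hypothetical names."""
    for path in Path(xml_dir).glob("*.xml"):
        root = ET.parse(path).getroot()
        for article in root.iter("article"):
            title = (article.findtext("title") or "").strip().lower()
            if title in WANTED_TITLES:
                yield path.name, article.findtext("text")

for source, text in joke_articles("newspaper_xml"):
    print(source, (text or "")[:60])
```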

Marking up a joke with tags

The archive’s public-facing platform is owned and operated by the commercial publisher Gale Cengage, which sells subscriptions to universities and libraries around the world (UK universities currently get free access via JISC). Consequently, access to the archive’s underlying content is restricted when using this interface. While it’s easy to identify thousands of joke columns using the archive’s search tools, it isn’t possible to automatically extract all of the results. The interface does not provide access to the underlying XML files, and images can only be downloaded one by one using a web browser’s ‘save image as’ button. In other words, we can’t use the commercial interface to instantly grab the XML and TIFF files for every article with the phrase “Jokes of the Week” in its title.

The British Library keeps its own copies of these files, but they are currently housed in a form of digital deep-storage that researchers cannot access directly and within which content is extremely cumbersome to discover. In order to move forward with the automatic extraction of jokes we will need to secure access to this data, transfer it onto a more accessible internal server, and custom-build an index of the full text of the articles and their titles, so that we can extract all of the relevant text along with the image files showing the areas of the newspaper scans from which the text was derived.

All of this is technically possible, and I’m hopeful that we’ll find a way to do it in the next stage of the project. However, given the limited time available to us we decided to press ahead with a small sample of manually extracted columns and focus our attention on the next stages of the project. This manually created sample will be of great use in future, as we and other research groups can use it to train computer models, which should enable us to automatically classify text from other corpora as potentially containing jokes that we would not have been able to find otherwise.
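As a sketch of how that training step might eventually look, a simple bag-of-words classifier could be fitted to the transcribed sample. scikit-learn is used here purely for illustration, and the two-item training set is a stand-in for the real corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in training data: transcribed jokes vs. other newspaper text.
texts = [
    "Jack (nervously): 'Will you marry me?' She (archly): 'Ask papa.'",
    "The corn market was steady on Tuesday, with wheat unchanged.",
]
labels = [1, 0]  # 1 = joke, 0 = not a joke

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

# Score an unseen snippet for 'jokiness'.
print(model.predict_proba(["He (gloomily): 'She said no.'"])[0][1])
```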

For our sample we manually downloaded all of the ‘Jokes of the Day’ columns published by Lloyd’s Weekly News in 1891. Here’s a typical example:

[Image: a ‘Jokes of the Day’ column from Lloyd’s Weekly News]

These columns contain a mixture of joke formats – puns, conversations, comic stories, etc – and are formatted in a way that makes them broadly representative of the material found elsewhere in the database. If we can find a way to process 1,000 jokes from this source, we shouldn’t have too much difficulty scaling things up to deal with 100,000 similar gags from other newspapers.    

Our sample of joke columns was downloaded as a set of JPEG images. In order to make them keyword-searchable, transform them into ‘memes’, and send them out over social media, we first need to convert them into accurate, machine-readable text. We don’t have access to the existing OCR data, but even if we did, it wouldn’t be accurate enough for our purposes. Here’s an example of how one joke has been interpreted by OCR software:

[Image: a joke and its OCR transcription compared]

Some gags have been rendered more successfully than this, but many are substantially worse. Joke columns often appeared at the edge of a page, which makes them susceptible to fading and page bending. They also make use of unusual punctuation, which tends to confuse the scanning software. And unlike a newspaper archive, which remains functional even with relatively low-quality OCR, our project requires 100% accuracy (or something very close) in order to republish the jokes in new formats.

So, even if we had access to OCR data we’d need to correct and improve it manually. We experimented with this process using OCR data taken from the British Newspaper Archive, but the time it took to identify and correct errors turned out to be longer than transcribing the jokes from scratch. Our volunteers reported that the correction process required them to keep looking back and forth between the image and the OCR in order to correct errors one-by-one, whereas typing up a fresh transcription was apparently quick and straightforward. It seems a shame to abandon the OCR, and I’m hopeful that we’ll eventually find a way to make it usable. The imperfect data might work as a stop-gap to make jokes searchable before they are manually corrected. We may be able to improve it using new OCR software, or speed up the correction process by making use of interface improvements like TILT. However, for now, the most effective way to convert the jokes into an accurate, machine-readable format is simply to transcribe directly from the image.

26 September 2014

Applying Forensics to Preserving the Past: Current Activities and Future Possibilities

First Digital Lives Research Workshop 2014 at the British Library
 


 

 

With more and more libraries, archives and museums adopting forensic approaches and tools for handling and processing born-digital objects, both in the UK and overseas, it seemed a good time to take stock. Archivists and curators were invited (via professional email listservs) to submit a short paper for an inclusive and interactive workshop stretching over two days in London.

Institutions are applying digital forensics across the entire lifecycle, from appraisal through to content analysis, and have begun to establish workflows that embrace forensic techniques: the use of write blockers during the creation of disk images; the extraction of metadata; and the searching, filtering and interpretation of digital data, notably the appropriate management of sensitive information.
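A small example of the sort of step these workflows chain together: once a disk has been imaged behind a write blocker, its fixity can be recorded and re-checked with an ordinary checksum. A minimal sketch (the algorithm choice and file name are illustrative):

```python
import hashlib

def fixity(path, algorithm="sha256", chunk_size=1 << 20):
    """Checksum a potentially large disk image in chunks."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record at imaging time; verify again after every transfer.
print(fixity("floppy042.img"))
```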

There are two sides to digital forensics: it begins with the protection of digital evidence and concludes with the retrospective analysis of past events and objects. Papers reflecting both aspects were submitted for the workshop (download DLRW 2014 Outline).

The workshop provided participants with opportunities to report on current activities, to highlight gaps, constraints and possibilities, and to discuss and agree collective steps and actions.

 


 

As the following list demonstrates, delegates came from a diverse range of institutions: universities, libraries, galleries and archives, and the private sector.

Matthew Addis, Arkivum

Fran Baker, John Rylands Library, University of Manchester 

Thom Carter, London School of Economics Library

Dianne Dietrich, Cornell University Library

Rachel Foss, British Library

Claus Jensen, Royal Library of Denmark and Copenhagen University Library

Jeremy Leighton John, British Library

Svenja Kunze, Bodleian Library, University of Oxford

John Langdon, Tate Gallery

Cal Lee, University of North Carolina at Chapel Hill

Caroline Martin, John Rylands Library, University of Manchester (contributor to paper)

Helen Melody, British Library

Stephen Rigden, National Library of Scotland

Elinor Robinson, London School of Economics Library

Susan Thomas, Bodleian Library, University of Oxford 

Dorothy Waugh, Emory University

 


 

I gave an introduction to the original Digital Lives Research project and a brief overview of the ensuing internal projects at the British Library (Personal Digital Manuscripts and Personal Digital Archives), while Aquiles Alencar-Brayner gave an introduction to Digital Scholarship at the British Library including the award winning BL Labs project. 

Short talks presented overviews of current activities at the National Library of Scotland, the University of Manchester and the London School of Economics, and of the establishment of forensic and digital archiving at these institutions, including the value of a secure and dedicated workspace, the use of a forensic tool for examining large numbers of emails, the integration of forensic techniques within existing working environments and practices, and the importance of tailored training.

Other talks were directed at specific applications of forensic tools: the preservation of complex digital objects in the Rose Goldsen Archive of New Media at Cornell University Library, the capture of computer games at the Royal Library of Denmark, and the challenges of capturing the floppy disks of poet and author Lucille Clifton at Emory University, these media being derived from a Magnavox Videowriter.

 


 

My colleagues Rachel Foss and Helen Melody and I presented a paper on the Hanif Kureishi Archive, a collection of paper and digital materials recently acquired by the British Library’s literary curators, specifically outlining the use of digital forensics for appraisal and textual analysis.

Prior to acquisition Rachel and I previewed the archive using fuzzy hashing (a technique for quickly identifying similar files). 
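With the python-ssdeep bindings, for instance, such a preview can be approximated along the following lines. This is a sketch of the general technique, not the procedure we actually ran, and the folder name and threshold are invented:

```python
import itertools
from pathlib import Path

import ssdeep  # python-ssdeep bindings to the ssdeep fuzzy-hashing library

def similar_pairs(folder, threshold=60):
    """Fuzzy-hash every file, then report pairs whose ssdeep score
    (0-100) clears the threshold; high scores suggest related drafts."""
    hashes = {p.name: ssdeep.hash_from_file(str(p))
              for p in Path(folder).iterdir() if p.is_file()}
    for (a, ha), (b, hb) in itertools.combinations(hashes.items(), 2):
        score = ssdeep.compare(ha, hb)
        if score >= threshold:
            yield a, b, score

for a, b, score in similar_pairs("archive_preview"):
    print(f"{a} ~ {b}: {score}")
```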

  

 

After the archive was obtained and forensically captured, metadata were extracted from the digital objects and made available along with curatorial versions of the text documents, and Helen catalogued them using the British Library’s Integrated Archives and Manuscripts System.

 


One of the most exciting aspects of the archive is a set of 53 drafts of Hanif Kureishi’s novel Something To Tell You, which Rachel, Helen and I decided to explore as an example for the workshop. 

 

 


Figure 1. Logical file size plotted against last modified date: an editing history

 

We used the sdhash tool (produced by Vassil Roussev of the University of New Orleans and incorporated within the BitCurator framework). Like the ssdeep fuzzy hashing tool (which has been incorporated into Forensic Toolkit, FTK), it identifies similarities among files but uses a distinct approach.


With BitCurator it is possible to direct sdhash at a set of files and ask the tool first to create the similarity digests and then to make pairwise comparisons across them for all files, each pair of files being assigned a similarity score.
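Invoked from a script, that workflow looks roughly like the sketch below. The '-g' (generate-and-compare) flag and the pipe-separated output are as documented for sdhash 3.x, but treat both as assumptions to verify against the version shipped with your BitCurator install:

```python
import subprocess
from pathlib import Path

def sdhash_pairwise(folder):
    """Ask sdhash to build similarity digests for every file and compare
    all pairs, yielding (file_a, file_b, score) triples."""
    files = sorted(str(p) for p in Path(folder).iterdir() if p.is_file())
    out = subprocess.run(["sdhash", "-g"] + files, capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        file_a, file_b, score = line.rsplit("|", 2)
        yield file_a, file_b, int(score)

# Highest-scoring (most similar) draft pairs first.
for a, b, score in sorted(sdhash_pairwise("stty_drafts"), key=lambda r: -r[2]):
    print(f"{score:3d}  {a}  {b}")
```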

 


 

Figure 2. Similarity score (sdhash) plotted against the absolute difference in indicated dates (days) between the files of each pair of drafts: generally, the greater the number of days between a pair of files, the lower their similarity score

 

This is a preliminary analysis, and readers of this blog entry who are familiar with statistical methods may recognise that it might be better to use partial regression or a similar statistical approach. A further small point: as Dr Roussev has emphasised, a 100% similarity score does not mean that the files are identical; cryptographic hashes can serve this purpose and are to be incorporated in future versions of the sdhash tool, which is still under active development.

 

Following the more formal talks we began an open discussion with the aim of identifying some priority topics, and subsequently we divided into three groups to address metadata, access and sensitivity respectively, concluding the first day. On the second day we focussed the conversation further, with two groups addressing cataloguing and metadata on the one hand, and tools and workflows on the other.

Steps towards specific conclusions and recommended actions were made in preparation for publication and dissemination. 

The desire to continue and extend the collaboration was strongly expressed, and fittingly Cal Lee concluded the workshop by updating us on developments of the BitCurator platform and the launch of the BitCurator Consortium, an important invitation for institutions to participate and for individuals to collaborate. 

BitCurator is going from strength to strength: receiving an extension of the project, formally launching the BitCurator Consortium, and releasing Version 1.0 of the BitCurator software.  

 

Many congratulations to Fran and Caroline on their email project becoming a finalist for the Digital Preservation Awards 2014: the University of Manchester Library’s Carcanet Press Archive project, which among many things explored the use of the forensic tool Email Examiner along with Aid4Mail (which, incidentally, has a forensic version).

 


 

The workshop was jointly organised by me, Cal Lee (University of North Carolina at Chapel Hill) and Susan Thomas (Bodleian Library, University of Oxford).  

Very many thanks to the delegates for all of their participation over the two days. 

Jeremy Leighton John, Curator of eMSS 

@emsscurator

15 September 2014

Finding Jokes - The Victorian Meme Machine

Posted on behalf of Bob Nicholson.

The Victorian Meme Machine is a collaboration between the British Library Labs and Dr Bob Nicholson (Edge Hill University). The project will create an extensive database of Victorian jokes and then experiment with ways to recirculate them over social media. For an introduction to the project, take a look at this blog post or this video presentation.

Stage One: Finding Jokes

Whenever I tell people that I’m working with the British Library to develop an archive of nineteenth-century jokes, they often look a bit confused. “I didn’t think the Victorians had a sense of humour”, somebody told me recently. This is a common misconception. We’re all used to thinking of the Victorians as dour and humourless; as a people who were, famously, ‘not amused’. But this couldn’t be further from the truth. In fact, jokes circulated at all levels of Victorian culture. While most of them have now been lost to history, a significant number have survived in the pages of books, periodicals, newspapers, playbills, adverts, diaries, songbooks, and other pieces of printed ephemera. There are probably millions of Victorian jokes sitting in libraries and archives just waiting to be rediscovered – the challenge lies in finding them.   

In truth, we don’t know how many Victorian gags have been preserved in the British Library’s digital collections. Type the word ‘jokes’ into the British Newspaper Archive or the JISC Historical Texts collection and you’ll find a handful of them fairly quickly. But this is just the tip of the iceberg. There are many more jests hidden deeper in these archives. Unfortunately, they aren’t easy to uncover. Some appear under peculiar titles, others are scattered around as unmarked column fillers, and many have aged so poorly that they no longer look like jokes at all. Figuring out an effective way to find and isolate these scattered fragments of Victorian humour is one of the main aims of our project. Here’s how we’re approaching it.

Firstly, we’ve decided to focus our attention on two main sources: books and newspapers. While it’s certainly possible to find jokes elsewhere, these sources provide the largest concentrations of material. A dedicated joke book, such as this Book of Humour, Wit and Wisdom, contains hundreds of viable jokes in a single package. Similarly, many Victorian newspapers carried weekly joke columns containing around 30 gags at a time – over the course of a year, a regularly printed column yields more than 1,500 jests. If we can develop an efficient way to extract jokes from these texts then we’ll have a good chance of meeting our target of 1 million gags.


Our initial searches have focused on two digital collections:

1) The 19th Century British Library Newspapers Database.

2) A collection of nineteenth-century books digitised by Microsoft.

In order to interrogate these databases we’ve compiled a continually-expanding list of search terms. Obvious keywords like ‘jokes’ and ‘jests’ have proven to be effective, but we’ve also found material using words like ‘quips’, ‘cranks’, ‘wit’, ‘fun’, ‘jingles’, ‘humour’, ‘laugh’, ‘comic’, ‘snaps’, and ‘siftings’. However, while these general search terms are useful, they don’t catch everything. Consider these peculiarly-named columns from the Hampshire Telegraph:

[Image: peculiarly-named column titles from the Hampshire Telegraph]

At first glance, they look like recipes for buckwheat cakes – in fact, they’re columns of imported American jokes named after what was evidently considered to be a characteristically Yankee delicacy. I would never have found these columns using conventional keyword searches. Uncovering material like this is much more laborious, and requires us to manually look for peculiarly-named books and joke columns.

In the case of newspapers, this requires a bit of educated guesswork. Most joke columns appeared in popular weekly papers, or in the weekend editions of mass-market dailies. So, weighty morning broadsheets like the London Times are unlikely to yield many gags. Similarly, while the placement of joke columns varied from paper to paper (and sometimes from issue to issue), they were typically placed at the back of the paper alongside children’s columns, fashion advice, recipes, and other miscellaneous tit-bits of entertainment. Finally, once a newspaper has been proven to contain one set of joke columns, the likelihood is that more will be found under other names. For example, initial keyword searches seem to suggest that the Newcastle Weekly Courant discontinued its long-running ‘American Humour’ column in 1888. In fact, the column was simply renamed ‘Yankee Snacks’ and continued to appear under this title for another 8 years.

Tracking a single change of identity like this is fairly straightforward; once the new title has been identified we simply need to add it to our list of search terms. Unfortunately, the editorial whims of some newspapers are harder to follow. For example, the Hampshire Telegraph often scattered multiple joke columns throughout a single issue. To make things even more complicated, they tended to rename and reposition these columns every couple of weeks. Here’s a sample of the paper’s American humour columns, all drawn from the first 6 months of 1892:

[Image: a sample of the Hampshire Telegraph’s renamed American humour columns]
For papers like this, the only option is to manually locate jokes columns one at a time. In other words, while our initial set of core keywords should enable us to find and extract thousands of joke columns fairly quickly, more nuanced (and more laborious) methods will be required in order to get the rest.

It’s important to stress that jokes were not always printed in organised collections. Some newspapers mixed humour with other pieces of entertaining miscellany under titles such as ‘Varieties’ or ‘Our Carpet Bag’. The same is true of books, which often combined jokes with short stories, comic songs, and material for parlour games. While it’s fairly easy to find these collections, recognising and filtering out the jokes is more problematic. As our project develops, we’d like to experiment with some kind of joke-detection tool that picks out content with similar formatting and linguistic characteristics to the jokes we’ve already found. For example, conversational jokes usually have capitalised names (or pronouns) followed by a colon and, in some cases, include a descriptive phrase enclosed in brackets. So, if a text includes strings of characters like “Jack (…):” or “She (…):” then there’s a good chance that it might be a joke. Similarly, many jokes begin with a capitalised title followed by a full-stop and a hyphen, and end with an italicised attribution. Here’s a characteristic example of all three trends in action:

[Image: a joke displaying all three features]

Unfortunately, conventional search interfaces aren’t designed to recognise nuances in punctuation, so we’ll need to build something ourselves. For now, we’ve chosen to focus our efforts on harvesting the low-hanging fruit found in clearly defined collections of jokes.
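The sketch below shows what a first pass at such a tool might look like, encoding the three cues described above as regular expressions. The patterns and the one-cue threshold are illustrative guesses to be tuned against real data, not a tested classifier:

```python
import re

# The three cues described above, as rough regular expressions.
CUES = [
    re.compile(r"\b[A-Z][a-z]+\s*(\([^)]*\))?\s*:"),        # Jack (nervously):
    re.compile(r"^[A-Z][A-Z' ]+\.\s*[-\u2013\u2014]"),       # capitalised title, stop, hyphen
    re.compile(r"[-\u2013\u2014]\s*[A-Z][\w'. ]+\.?\s*$"),   # trailing attribution
]

def looks_like_joke(snippet, min_cues=1):
    """Crude heuristic: does the snippet match enough of the cues?"""
    text = snippet.strip()
    return sum(bool(pattern.search(text)) for pattern in CUES) >= min_cues

print(looks_like_joke(
    "A SURE THING.- Jack (gloomily): 'She said no.' -Punch."))  # True
```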

The project is still in the pilot stage, but we’ve already identified the locations of more than 100,000 jokes. This is more than enough for our current purposes, but I hope we’ll be able to push onwards towards a million as the project expands. The most effective way to do this may well be to harness the power of crowdsourcing and invite users of the database to help us uncover new sources. It’s clear from our initial efforts that a fully-automated approach won’t be effective. Finding and extracting large quantities of jokes – or, indeed, any specific type of content – from among the millions of pages of books and newspapers held in the library’s collection requires a combination of computer-based searching and human intervention. If we can bring more people on board we’ll be able to find and process the jokes much faster.

Finding gags is just the first step. In the next blog post I’ll explain how we’re extracting joke columns from the library’s digital collections, importing them into our own database, and transcribing their contents. Stay tuned!

 

27 August 2014

The British Library Meets Burning Man

Posted on behalf of David Normal (edited by Sophie McIvor and Mahendra Mahey)


In December 2013 the British Library uploaded over a million images from our 19th century digitised books onto Flickr Commons, with the invitation for anyone to remix, re-use and re-purpose the content as they wish.

The response from the online community was outstanding, but by far the most unexpected use of the British Library’s Flickr Commons images is happening this week - the collection has inspired four large-scale artworks on display at this year’s Burning Man festival in the Nevada desert, created by David Normal, a California-based artist with a special interest in 19th century illustration.

One of David’s four paintings being installed at Burning Man 2014 (photographed by Andrew Spalding)

 
A video showing the process of one of the lightboxes being installed at Burning Man 2014 
(Courtesy of David Normal)

Before he headed off to the desert to install his “Crossroads of Curiosity” artworks at the festival, we spoke to David about how this came about, and how he used the image collection:

What first attracted you to the idea of using 19th Century illustrations in your art?

Beginning as a teenager I was interested in making “seamless” collages, in which the elements go together so smoothly that it looks as though it were all one illustration. I love Max Ernst’s collage novel, “Une Semaine De Bonte” which took this seamless collage aesthetic to its zenith using 19th century illustration.  Recently, I began painting over digital collage prints, and this process opened up a lot of possibilities, to the point where I felt that I could use the 19th century in a fresh way that is not derivative of Ernst’s work.

How did you come across the British Library’s Flickr Commons collection?

The guitarist of the punk band “Flipper” mentioned something about it and at the time I had already initiated the plan to create paintings based on 19th Century images for Burning Man, and so learning of this vast online collection was thrilling and truly fortuitous since it was exactly what I was looking for.

How has the Library’s collection informed your artwork?

After being introduced to the collection I realized that everything I needed was there.  I decided to use the collection exclusively, and make that one of the hallmarks of the project. Indeed, I feel that the “Crossroads of Curiosity” celebrates this amazing collection.

One of the most striking aspects of the collection is its colossal size.  Having a lot of material to choose from is important in collage making, since out of excess come the chance juxtapositions that are so magical.

Another thing that was very helpful to me was the randomness.  The majority of the images are in no particular order in the photostream, and viewing the images in succession was like taking a journey through a landscape of illustrated symbols. 

How did you identify which images you wanted to use?

Certain images have some symbolic power or strangeness that intrigues me and those are the images I am drawn to. This has to do with thematic preoccupations that percolate up from my subconscious on the one hand, and with my taste in things on the other, and also with the specific theme I am working with on the Burning Man project, which is “Caravansary - The Silk Road”. I have favorited nearly 3000 images on my own Flickr page.

What happens next?

I start with selecting several images that I think will go together well.  I bring them into Photoshop and then begin to arrange and play with them.  As the composition develops the images are increasingly cleaned up, edited, and composed together. 

The images below outline the development of the collage painting, “Conflamingulation”, one of four that will be featured on 8’x20’ lightpanels at Burning Man:

The chance conjunction of the machine gunner and the skunk suggests an idea for a collage.


A rough collage is made.

Different arrangements are experimented with.

A final version is arrived at that is the basis of the painting.

Finished painting: “Conflamingulation”, acrylic on polypropylene film, lightpanel, 35” x 96”, 2014

Which is your favourite of all the images you’ve discovered on the Flickr Commons collection?

I think I have not viewed more than 10% of the collection altogether, so I can’t say that I have enough familiarity to choose a favourite fairly.  However, if I had to select a single image then perhaps I would choose this skunk because of his great versatility as a piece of clip art.


Image available at the British Library Flickr Commons page, taken from page 42 of Our Earth and Its Story: A Popular Treatise on Physical Geography, edited by Robert Brown, published by Cassell and Company Limited.

What is special about a collection like this?

Being able to use illustrations as a way of approaching books is interesting - typically the reverse is the case: reading a book, you find the illustrations, and not vice versa.

What do you hope that people at Burning Man will take from the finished pieces?

Larry Harvey, the director of Burning Man, has said that he hopes the pieces will evoke a feeling of “romance”, in the sense of the romanticism of myths and fairytales such as the Arabian Nights.  I will concur with that.  The pieces are meant to show the intersections of distant times, places, peoples and things in humorous and thought provoking ways.  It is a cabinet of curiosities that has opened up to encompass the world in series of dramatic tableaux.  I hope the Crossroads of Curiosity fills the viewer with wonder, and arouses their own curiosity.

David Normal’s ‘Crossroads of Curiosity’ artworks are on display at the Burning Man Festival from 25 August – 1 September.

Here is one of his illuminated panels from Burning Man 2014:

One of David Normal's illuminated panels for Burning Man 2014.

You can discover more about his work at www.davidnormal.com.

 

20 August 2014

Interactive Fiction Writer-in-Residence for the Lines in the Ice Exhibition

From this week onwards, visitors to the Library may come face-to-chest with the institution’s very own example of cryptozoology. An enormous specimen, hunched (though only when passing through doorways) and pallid from too much time spent in the Rare Books Reading Room, this survival of an earlier era can most often be found in the foyer lapping at the water fountain, reading quietly on his iPad, or roaming the canteen, hunting for delicious vegetarian prey.

The British Library is very pleased to welcome the Library's first Interactive Fiction Writer-in-Residence: Rob Sherman is a writer and games designer whose first digital project, the enormous and sprawling browser-based storygame The Black Crown Project, was published by Random House and challenged digital expectations in the publishing industry. Another notable project is his recent Twine game for Shelter about the housing crisis, called The Spare Set.

Rob has successfully acquired CreativeWorks London funding from their entrepreneur-in-residence scheme to be the attached digital writer for the Library’s upcoming exhibition, Lines in the Ice, which will display documents, maps and paraphernalia relating to Arctic exploration expeditions, including John Franklin’s ill-fated voyage to find the Northwest Passage in 1845. The ensuing tales of cannibalism, exposure and desperate contact with the local Inuit are sure to suit Rob’s nightmarish yet delicate prose, once compared to ‘knitting intestines’ by a staunch admirer.

As well as being glimpsed in the corner of your eye as you walk around the Library, Rob will be researching the collections and producing original and unique digital and physical works to accompany the exhibition. While the details are still being finalised, rest assured that you will not need to visit the Library physically to experience Rob’s work; everything will be released online, and any physical works will be digitised. He will also be documenting his progress via a research blog, and hosting events, where he will be sharing his work and documenting his journey into the farthest reaches of our collections.

However, he would like to point out that he is not as scary and legendary as all that, and if you spot him, he will happily stop for a chat.


Rob Sherman, Interactive Fiction Writer-in-Residence for the Lines in the Ice Exhibition

http://bonfiredog.co.uk/

@rob_sherman
