Digital scholarship blog

Enabling innovative research with British Library digital collections

208 posts categorized "Experiments"

31 March 2023

Mapping Caribbean Diasporic Networks through the Correspondence of Andrew Salkey

This is a guest post by Natalie Lucy, a PhD student at University College London, who recently undertook a British Library placement to work on a project Mapping Caribbean Diasporic Networks through the correspondence of Andrew Salkey.

Project Objectives

The project, supervised by curators Eleanor Casson and Stella Wisdom, focussed on the extensive correspondence contained within Andrew Salkey’s archive. One of the initial objectives was to digitally depict the movement of key Caribbean writers and artists, as evidenced within the correspondence; many of them travelled between Britain and the Caribbean as well as the United States, Central and South America and Africa. Although Salkey corresponded with a diverse range of people, we therefore focused on the letters in his archive which were from Caribbean writers and academics and which illustrated patterns of movement of the Caribbean diaspora. Much of the correspondence stems from the 1960s and 1970s, a time when Andrew Salkey was particularly active both in the Caribbean Artists Movement and, as a writer and broadcaster, at the BBC.

Photograph of Andrew Salkey's head and shoulders in profile
Photograph of Andrew Salkey

Andrew Salkey was unusual not only for the panoply of writers, artists and politicians with whom he was connected, but also for the way he sustained those relationships, carefully preserving the correspondence which resulted from those networks. My personal interest in this project stemmed from the fact that my PhD seeks to consider the ways that the Caribbean trickster character, Anancy, has historically been reinvented to say something about heritage and identity. Significant to that question was the way that the Caribbean Artists Movement, a dynamic group of artists and writers formed in London in the mid-1960s, of which Andrew Salkey was a founder, appropriated Anancy, reasserting him and the folktales to convey something of a literary ‘voice’ for the Caribbean. For this reason, I was also interested in the writing networks evidenced within the correspondence, together with their impact.

What is Gephi?

Prior to starting the project, Eleanor, who had catalogued the Andrew Salkey archive, and Digital Curator Stella had identified Gephi as a possible software application with which to visualise this data. Gephi has been used in a variety of projects, including several at Harvard University; examples of the breadth and diversity of those initiatives can be found here. Several of these projects have social networks or historical trading routes as their focus, with obvious parallels to this project. Others notably use correspondence as their main data.

Gathering the Data

Andrew Salkey was known as something of a chronicler. He was interested in letters and travel and was also a serious collector of stamps. As such, he had not only retained the majority of the letters he received but categorised them. Eleanor had originally identified potential correspondents who might be useful to the project, selecting writers who travelled widely, whose correspondence had been separately stored by Salkey, partly because of its volume, and who might be of wider interest to the public. These included the acclaimed Caribbean writers, Samuel Selvon, George Lamming, Jan Carew and Edward Kamau Brathwaite and publishers and political activists, Jessica and Eric Huntley.

Our initial intention was to limit the data to simple facts which could easily be gleaned from the letters. Gephi required that we record them in a spreadsheet, which had to conform to a particular format. In the first stages of the project, the data was confined to the dates and locations of the correspondence, information which could suggest the patterns of movement within the diaspora. However, the letters were so rich in detail that we ultimately recorded other information. This included any additional travel taken by any of the correspondents which was clearly evidenced in the letters, together with any passages from the correspondence which demonstrated either something of the nature and quality of the friendships or, alternatively, the mutual benefit of those relationships to the careers of so many of the writers.

Creating a visual network

Dr Duncan Hay was invited to collaborate with me on this project, as he has considerable expertise in this field; his research interests include web mapping for culture and heritage and data visualisation for literary criticism. After the initial data was collated, we discussed with Duncan what visualisations could be created. It became apparent early on that creating a visualisation of the social networks, as opposed to the patterns of movement, might be relatively straightforward via Gephi, an application which is particularly useful for this type of graph. I had prepared a spreadsheet but, because Gephi requires the data to be presented in a strictly consistent way, any anomalies had to be eradicated and the data effectively ‘cleaned up’ using OpenRefine. Gephi also requires that information is presented by way of a system of ‘nodes’, ‘edges’ and ‘attributes’, with corresponding spreadsheet columns. In our project, the ‘nodes’ referred to Andrew Salkey and each of the correspondents and other individuals of interest who were specifically referred to within the correspondence. The ‘edges’ referred to the way that those people were connected which, in this case, was through correspondence. However, what added to the potential of the project was that these nodes and edges could be further described by reference to ‘attributes’. The possibility of assigning a range of ‘attributes’ to each of the correspondents allowed a wealth of additional information to be provided about the networks. As a consequence, and in order to make any visualisation as informative as possible, I also added brief biographical information for each of the writers and artists as ‘attributes’, together with some explanation of the nature of the networks being illustrated.
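As a rough sketch of the node and edge tables Gephi imports, the snippet below writes two CSV files in the conventional layout (an `Id`/`Label` column pair for nodes, `Source`/`Target` columns for edges, plus extra columns for attributes). The names, biographies and letter counts here are invented for illustration and are not taken from the archive:

```python
import csv

# Hypothetical nodes: each person in the network, with a biography attribute.
nodes = [
    {"Id": "salkey", "Label": "Andrew Salkey", "Biography": "Writer and broadcaster"},
    {"Id": "lamming", "Label": "George Lamming", "Biography": "Barbadian novelist"},
]

# Hypothetical edges: each row links two node Ids; Weight can carry a letter count.
edges = [
    {"Source": "lamming", "Target": "salkey", "Type": "Undirected", "Weight": 12},
]

with open("nodes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Id", "Label", "Biography"])
    writer.writeheader()
    writer.writerows(nodes)

with open("edges.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Source", "Target", "Type", "Weight"])
    writer.writeheader()
    writer.writerows(edges)
```

Files in this shape can be loaded through Gephi's Data Laboratory, with the attribute columns then available for labelling and filtering the graph.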

The visual illustration below shows not only the quantity of letters from the sample of correspondents to Andrew Salkey (the pink lines), but also which other correspondents formed part of those networks and were referenced as friends or contacts within specific items of correspondence. For example, George Lamming references the academic Rex Nettleford and the writer and activist Claudia Jones, founder of the Notting Hill Carnival, in his correspondence, connections which are depicted in grey.

Data visualisation of nodes and lines representing Andrew Salkey's Correspondence Network
Gephi: Andrew Salkey correspondence network

The aim was, however, for the visualisation to also be interactive. This required considerable further manipulation of the format and tools. In this illustration you can see the information that is revealed about the prominent Barbadian writer, George Lamming which, in an interactive format, can be accessed via the ‘i’ symbols beside many of the nodes coloured in green.  

Whilst Gephi was a useful tool with which to illustrate the networks, it was less helpful as a way to demonstrate the patterns of movement, one of the primary objectives of the project. A challenge was, therefore, to create a map which could be both interactive and illustrative of the specific locations of the correspondents as well as their movement over time. With Duncan’s input and expertise, we opted for a hybrid approach, utilising two principal ways to illustrate the data: we used Gephi to create a visualisation of the ‘networks’ (above) and another software tool, Kepler.gl, to show the diasporic movement.

A static version of what ultimately will be a ‘moving’ map (illustrating correspondence with reference to person, date and location) is shown below. As well as demonstrating patterns of movement, it should also be possible to access information about specific letters as well as their shelf numbers through this map, hopefully making the archive more accessible.
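As a sketch of the kind of tabular input a tool like Kepler.gl can play back over time, the snippet below builds a small CSV of letters with a place, coordinates and a date column. The correspondent, locations, coordinates and dates are invented for illustration, not drawn from the archive:

```python
import csv
from datetime import date

# Illustrative rows only: one row per letter, with where it was written from.
letters = [
    ("George Lamming", "Bridgetown", 13.0975, -59.6167, date(1966, 5, 2)),
    ("George Lamming", "London", 51.5074, -0.1278, date(1967, 1, 14)),
]

with open("correspondence_points.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["correspondent", "place", "latitude", "longitude", "date"])
    for name, place, lat, lng, when in letters:
        writer.writerow([name, place, lat, lng, when.isoformat()])
```

With latitude/longitude columns recognised as a point layer and the date column used as a time filter, rows like these can be animated to suggest movement between locations.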

Data visualisation showing lines connecting countries on a map showing part of the Americas, Europe and Africa
Patterns of diasporic movement from Andrew Salkey's correspondence, illustrated in Kepler.gl

Whilst we are still exploring the potential of this project and how it might intersect with other areas of research and archives, it has already revealed something of the benefits of this type of data visualisation. For example, a project of this type could be used as an educational tool, providing something of a simple, but dynamic, introduction to the Caribbean Artists Movement. Being able to visualise the project has also allowed us to input information which confirms where specific letters of interest might be found within the archive. Ultimately, it is hoped that the project will offer ways to make a rich, yet arguably undervalued, archive more accessible to a wider audience with the potential to replicate something of an introductory model, or ‘pilot’ for further archives in the future. 

20 March 2023

Digital Storytelling at the 2023 BL Labs Symposium

One half of the 2023 British Library Labs Symposium will be dedicated to digital storytelling. This has been a significant part of BL Labs work over the years; we have collaborated with experimental artists ranging from David Normal, whose creative reuse of British Library Flickr images formed his giant lightbox collage installation Crossroads of Curiosity at the 2014 Burning Man festival, to Michael Takeo Magruder, first runner-up in the BL Labs 2016 competition, with whom we worked on his 2019 exhibition Imaginary Cities.

People looking at lightbox collage artworks
Crossroads of Curiosity by David Normal

In the last few years, due to the COVID-19 pandemic disruption, digital stories and engagement have become mainstream across the Galleries, Libraries, Archives and Museums (GLAM) sector. New types of digital storytelling mixing social media, online exhibitions embedding narratives and digital objects, and interactive online events reaching entirely new audiences, delighted us all. However, we also discovered that there can be a saturation point with online engagement, and that many digital developments have some way to go to reach their full potential.

As we are hopefully entering healthier times, new opportunities to mix virtual and physical worlds are starting to open up. With this in mind, we felt that this is the right moment to explore a new age of digital storytelling at the 2023 BL Labs Symposium.

The idea is to explore what is changing in the world of technological possibilities and how they are continuing to develop. We have envisaged a journey that will take us from the big picture of the arising digital possibilities to more specific examples from the British Library’s work. In true BL Labs spirit we will also celebrate initiatives that creatively reuse the Library’s digital collections.

To help us look into the big trends, we are delighted to be joined by Zillah Watson, whose extraordinary breadth of experience working with the BBC, Meta, the BFI and the Royal Shakespeare Company, amongst many others, will help us to get a deeper sense of the opportunities of virtual reality (VR). Zillah will look into what it means, not just to be dazzled by technological possibilities, but also to enter the magic of storytelling.

Talking of magic, we are lucky to welcome award winning Director, Anrick Bregman, and award winning Producer, Grace Baird. Anrick and Grace will take us deeper into the potential of using VR to uncover hidden stories. Anrick’s film A Convict Story is an interactive VR project built on British Library data that brings to life a story discovered by the linking of data from centuries ago, using data research powered by machine learning.

Even closer to home, our own Stella Wisdom and Ian Cooke will talk about their current work on curating the British Library’s forthcoming Digital Storytelling exhibition (2 June – 15 October 2023), which will explore the ways technology provides opportunities to transform and enhance the way writers write and readers engage. The exhibition draws on the Library’s collection of contemporary digital publications and emerging formats to highlight the work of innovative and experimental writers. It will feature interactive works that invite and respond to user input, reading experiences influenced by data feeds, and immersive story worlds created using multiple platforms and audience participation. This is an exciting development, as we can see how earlier British Library creative digital experiments, collaborations and research projects are building into an exhibition in its own right.

We hope you can join us for discussion at the BL Labs Symposium on Thursday 30 March 2023. For the full programme, and further information on all our speakers, please read our earlier blog post.

You can book your place here.

02 March 2023

BL Labs Symposium 2023: Programme and Speakers announced

Book illustration of a shelf of books with "Informed" spelled across their spines
British Library digitised image from BL Flickr Collection - When Life is Young: a collection of verse for boys and girls by Mary Elizabeth Dodge

The BL Labs Symposium 2023 is taking place on Thursday 30th March as an online webinar.

This year we will be exploring two themes – digital storytelling and innovative uses of data and AI. As always, we are aiming to hear from some guest speakers, as well as showcase recent work using the British Library digital collections. The programme also includes an update on BL Labs, including our new website and services.

We hope this will spark many further ideas and collaborations.

The full programme for the BL Labs Symposium is as follows:

14.00 – Welcome

Part 1: Digital Storytelling

14.05 – How to bring the magic of VR to audiences – Zillah Watson

14.15 – There Exists – A VR experience about hidden narratives – Anrick Bregman and Grace Baird

14.25 – Curating a Digital Storytelling exhibition – Stella Wisdom and Ian Cooke

14.35 – Panel Q&A

15.00 – In Memoriam Maurice Nicholson

15.05 – Break

15.15 – BL Labs Update – Silvija Aurylaite

Part 2 – Data and AI

15.35 - Ithaca: Restoring and attributing ancient texts using deep neural networks - Yannis Assael

15.45 – Living with Machines: Using digitised newspaper collections from the British Library in a data science project – Kalle Westerling

15.55 – Locating a National Collection through audience research: how cultural heritage organisations can engage the public using geospatial data – Gethin Rees

16.05 – Panel Q&A

16.30 – END

You can register for the BL Labs Symposium here.

We are currently planning an evening networking session at the British Library, starting at 18.30 for those who can join us in London. We are aware of the train strike planned for this day, so will confirm details nearer the time.

Below are a few details about our speakers:

Head and shoulders photograph of Zillah Watson
Zillah Watson

Zillah Watson

Zillah Watson led the BBC's award winning VR studio, winning a host of awards at festivals around the world, including an Emmy nomination. She led pioneering work taking VR to audiences in libraries around the UK. She now consults on the metaverse, and content and audience growth strategies for organisations including Meta, London & Partners, the BFI, International News Media Association, Arts Council England, and the Royal Shakespeare Company. She's had a long and varied media career, including 20 years at the BBC, where she was a TV and radio current affairs journalist, head of editorial standards for BBC Radio and led R&D research on future content. She is a lecturer at UCL and the new London Interdisciplinary School. She recently co-founded Phase Space, a tech for good start-up to use VR to support mental health for students and young people.

Head and shoulders photograph of Anrick Bregman
Anrick Bregman

Anrick Bregman

Anrick is director and founder of an R&D studio that explores the future of spatial immersive storytelling by creating experiences built with virtual and augmented reality, computer vision and machine learning. His mission is to find new and interesting ways to merge technology with meaningful narratives which explore the human experience.

Head and shoulders photograph of Grace Baird
Grace Baird

Grace Baird

Grace is a Producer with twelve years' experience working on audience-centred projects in the Arts, TV, and Immersive industries. She is experienced in immersive and digital production and distribution, particularly entertainment content. Grace has produced a variety of innovative projects including site-specific installations, an interactive feature-film, and social-VR experiences.

Head and shoulders photograph of Stella Wisdom
Stella Wisdom

Stella Wisdom

Stella is Digital Curator for Contemporary British Collections at the British Library. She promotes creative and innovative reuse of digital collections and encourages game making and digital storytelling in libraries, collaborating widely with The National Videogame Museum, AdventureX, International Games Month in Libraries, the New Media Writing Prize, and on research projects with University College London’s Institute of Education and Lancaster University. Stella’s research interests also explore the archiving of complex born-digital material, examining methods for the collection, preservation and curation of narrative apps, digital comics and interactive fiction.

Head and shoulders photograph of Ian Cooke
Ian Cooke

Ian Cooke

Ian is Head of Contemporary British Publications at the British Library. He has worked in academic and research libraries with a focus on 20th and 21st-century history and social sciences. His interests are in the role of publishing in contemporary communications, and the everyday experience and expression of politics. 

Head and shoulders photograph of Silvija Aurylaite
Silvija Aurylaite

Silvija Aurylaite

Silvija Aurylaite is BL Labs Manager. She previously worked on the British Library Heritage Made Digital Programme. Her interests and domain of expertise include copyright, curation of digital collections of museums, archives and libraries, data science, design, creativity and social entrepreneurship. Previously, she initiated a new publishing project, Public Domain City, which aimed to bring new life to curious and obscure historical books on science, technology and nature. She also organised a retrospective dance film festival, Dance in Film, Choreography, Body and Image, and media dance educational activities at the National Gallery of Art in Vilnius.

Head and shoulders photograph of Yannis Assael
Yannis Assael

Yannis Assael

Dr. Yannis Assael is a Staff Research Scientist at Google DeepMind working on Artificial Intelligence, and he is featured in Forbes' "30 Under 30" distinguished scientists of Europe. In 2013, he graduated from the Department of Applied Informatics, University of Macedonia, and with full scholarships, he did an MSc at the University of Oxford, finishing first in his year, and an MRes at Imperial College London. In 2016, he returned to Oxford for a DPhil degree with a Google DeepMind scholarship, and after a series of research breakthroughs and entrepreneurial activities, he started as a researcher at Google DeepMind. His contributions range from audio-visual speech recognition to multi-agent communication and AI for culture and the study of damaged ancient texts. Throughout this time, his research has attracted the media's attention several times, has been featured on the cover of the scientific journal Nature, and focuses on contributing to and expanding the greater good.

Head and shoulders photograph of Kalle Westerling
Kalle Westerling

Kalle Westerling

Dr Kalle Westerling is a Digital Humanities Research Software Engineer with Living with Machines, a collaboration between the British Library, the Alan Turing Institute, and researchers from a range of UK universities. Kalle holds a Ph.D. in Theatre and Performance Studies from The Graduate Center, City University of New York (CUNY), where he visualised and analysed networks of itinerant nightlife performers around New York City in the 1930s. Prior to joining the British Library, Kalle managed the Scholars program at HASTAC and the Digital Humanities Research Institute at CUNY, both efforts across higher education institutions in the United States, aiming to build nation-wide infrastructures and communities for digital humanities skill-building.

Head and shoulders photograph of Gethin Rees
Gethin Rees

Gethin Rees

Gethin’s role at the British Library includes helping to manage the non-print legal deposit of digital maps and coordinating the Georeferencer crowd-sourcing project. He is interested in helping research projects to get the most out of geospatial data and tools and was principal investigator of the AHRC-funded Locating a National Collection project. Before taking up his current position in 2018 he worked on two collaborative history projects funded by the ERC and as a software developer. His PhD in archaeology from the University of Cambridge made use of Geographical Information Systems for spatial analysis and data management.

20 April 2022

Importing images into Zooniverse with an IIIF manifest: introducing an experimental feature

Digital Curator Dr Mia Ridge shares news from a collaboration between the British Library and Zooniverse that means you can more easily create crowdsourcing projects with cultural heritage collections. There's a related blog post on Zooniverse, Fun with IIIF.

IIIF manifests - text files that tell software how to display images, sound or video files alongside metadata and other information about them - might not sound exciting, but by linking to them, you can view and annotate collections from around the world. The IIIF (International Image Interoperability Framework) standard makes images (or audio, video or 3D files) more re-usable: they can be displayed on another site alongside the original metadata and information provided by the source institution. If an institution updates a manifest - perhaps adding information from updated cataloguing or crowdsourcing - any sites that display that image automatically get the updated metadata.
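To make that structure concrete, here is a pared-down sketch of the nesting inside a IIIF Presentation 2.x manifest (inlined as a Python dict rather than fetched over the network, with a made-up image URL) and the path software walks to reach the image files:

```python
# A heavily trimmed IIIF Presentation 2.x manifest; real manifests carry many
# more fields, but follow the same sequences -> canvases -> images nesting.
manifest = {
    "@type": "sc:Manifest",
    "label": "Example volume",
    "sequences": [{
        "canvases": [{
            "label": "f. 1r",
            "images": [{
                "resource": {"@id": "https://example.org/iiif/page1/full/full/0/default.jpg"}
            }]
        }]
    }]
}

# Collect every image URL the manifest points at.
image_urls = [
    image["resource"]["@id"]
    for sequence in manifest["sequences"]
    for canvas in sequence["canvases"]
    for image in canvas["images"]
]
```

A viewer or crowdsourcing platform given only the manifest URL can do this same walk to find the images, labels and metadata, which is what makes linking to a manifest enough to display a whole volume.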

Playbill showing the title after other large text

We've posted before about how we used IIIF manifests as the basis for our In the Spotlight crowdsourced tasks on LibCrowds.com. Playbills are great candidates for crowdsourcing because they are hard to transcribe automatically, and the layout and information present varies a lot. Using IIIF meant that we could access images of playbills directly from the British Library servers without needing server space and extra processing to make local copies. You didn't need technical knowledge to copy a manifest address and add a new volume of playbills to In the Spotlight. This worked well for a couple of years, but over time we'd found it difficult to maintain bespoke software for LibCrowds.

When we started looking for alternatives, the Zooniverse platform was an obvious option. Zooniverse hosts dozens of historical or cultural heritage projects, and hundreds of citizen science projects. It has millions of volunteers, and a 'project builder' that means anyone can create a crowdsourcing project - for free! We'd already started using Zooniverse for other Library crowdsourcing projects such as Living with Machines, which showed us how powerful the platform can be for reaching potential volunteers. 

But that experience also showed us how complicated the process of getting images and metadata onto Zooniverse could be. Using Zooniverse for volumes of playbills for In the Spotlight would require some specialist knowledge. We'd need to download images from our servers, resize them, generate a 'manifest' list of images and metadata, then upload it all to Zooniverse; and repeat that for each of the dozens of volumes of digitised playbills.

Fast forward to summer 2021, when we had the opportunity to put a small amount of funding into some development work by Zooniverse. I'd already collaborated with Sam Blickhan at Zooniverse on the Collective Wisdom project, so it was easy to drop her a line and ask if they had any plans or interest in supporting IIIF. It turned out they had, but hadn't previously had the resources or an interested organisation necessary to take it forward.

We came up with a brief outline of what the work needed to do, taking the ability to recreate some of the functionality of In the Spotlight on Zooniverse as a goal. Therefore, 'the ability to add subject sets via IIIF manifest links' was key. ('Subject set' is Zooniverse-speak for 'set of images or other media' that are the basis of crowdsourcing tasks.) And of course we wanted the ability to set up some crowdsourcing tasks with those items… The Zooniverse developer, Jim O'Donnell, shared his work in progress on GitHub, and I was very easily able to set up a test project and ask people to help create sample data for further testing. 

If you have a Zooniverse project and a IIIF address to hand, you can try out the import for yourself: add 'subject-sets/iiif?env=production' to your project builder URL. e.g. if your project is number #xxx then the URL to access the IIIF manifest import would be https://www.zooniverse.org/lab/xxx/subject-sets/iiif?env=production

Paste a manifest URL into the box. The platform parses the file to present a list of metadata fields, which you can flag as hidden or visible in the subject viewer (the public task interface). When you're happy, you can click a button to upload the manifest as a new subject set (like a folder of items), and your images are imported. (Don't worry if it says '0 subjects'.)

 

Screenshot of manifest import screen
Screenshot of manifest import screen

You can try out our live task and help create real data for testing ingest processes at https://frontend.preview.zooniverse.org/projects/bldigital/in-the-spotlight/classify

This is a very brief introduction, with more to come on managing data exports and IIIF annotations once you've set up, tested and launched a crowdsourced workflow (task). We'd love to hear from you - how might this be useful? What issues do you foresee? How might you want to expand or build on this functionality? Email [email protected] or tweet @mia_out @LibCrowds. You can also comment on GitHub https://github.com/zooniverse/Panoptes-Front-End/pull/6095 or https://github.com/zooniverse/iiif-annotations

Digital work in libraries is always collaborative, so I'd like to thank British Library colleagues in Finance, Procurement, Technology, Collection Metadata Services and various Collections departments; the Zooniverse volunteers who helped test our first task and of course the Zooniverse team, especially Sam, Jim and Chris for their work on this.

 

12 April 2022

Making British Library collections (even) more accessible

Daniel van Strien, Digital Curator, Living with Machines, writes:

The British Library’s digital scholarship department has made many digitised materials available to researchers. This includes a collection of digitised books created by the British Library in partnership with Microsoft. This is a collection of books that have been digitised and processed using Optical Character Recognition (OCR) software to make the text machine-readable. There is also a collection of books digitised in partnership with Google. 

Since being digitised, this collection has been used for many different projects. This includes recent work to try and augment the dataset with genre metadata and a project using machine learning to tag images extracted from the books. The books have also served as training data for a historic language model.

This blog post will focus on two challenges of working with this dataset: size and documentation, and discuss how we’ve experimented with one potential approach to addressing these challenges. 

One of the challenges of working with this collection is its size. The OCR output is over 20GB. This poses some challenges for researchers and other interested users wanting to work with these collections. Projects like Living with Machines are one avenue in which the British Library seeks to develop new methods for working at scale. For an individual researcher, one of the possible barriers to working with a collection like this is the computational resources required to process it. 

Recently we have been experimenting with a Python library, datasets, to see if this can help make this collection easier to work with. The datasets library is part of the Hugging Face ecosystem. If you have been following developments in machine learning, you have probably heard of Hugging Face already. If not, Hugging Face is a delightfully named company focusing on developing open-source tools aimed at democratising machine learning. 

The datasets library is a tool aiming to make it easier for researchers to share and process large datasets for machine learning efficiently. Whilst this was the library's original focus, there may also be other use cases for which the datasets library may help make datasets held by the British Library more accessible. 

Some features of the datasets library:

  • Tools for efficiently processing large datasets 
  • Support for easily sharing datasets via a ‘dataset hub’ 
  • Support for documenting datasets hosted on the hub (more on this later). 

As a result of these and other features, we have recently worked on adding the British Library books dataset to the Hugging Face hub. Making the dataset available via the datasets library has now made it more accessible in a few different ways.

Firstly, it is now possible to download the dataset in two lines of Python code: 

from datasets import load_dataset
ds = load_dataset('blbooks', '1700_1799')

We can also use the Hugging Face library to process large datasets. For example, we may only want to include data with a high OCR confidence score (this partially helps filter out text with many OCR errors): 

ds.filter(lambda example: example['mean_wc_ocr'] > 0.9)

One of the particularly nice features here is that the library uses memory mapping to store the dataset under the hood. This means that you can process data that is larger than the RAM you have available on your machine. This can make the process of working with large datasets more accessible. We could also use this as a first step in processing data before getting back to more familiar tools like pandas. 

dogs_data = ds['train'].filter(lambda example: 'dog' in example['text'].lower())
df = dogs_data.to_pandas()

In a follow on blog post, we’ll dig into the technical details of datasets in some more detail. Whilst making the technical processing of datasets more accessible is one part of the puzzle, there are also non-technical challenges to making a dataset more usable. 

 

Documenting datasets 

One of the challenges of sharing large datasets is documenting the data effectively. Traditionally, libraries have mainly focused on describing material at the ‘item level’, i.e. documenting one item at a time. However, there is a difference between documenting one book and 100,000 books. There are no easy answers to this, but one possible avenue libraries could explore is the use of datasheets. Timnit Gebru et al. proposed the idea in ‘Datasheets for Datasets’. A datasheet aims to provide a structured format for describing a dataset. This includes questions like how and why it was constructed, what the data consists of, and how it could potentially be used. Crucially, datasheets also encourage a discussion of the biases and limitations of a dataset. Whilst you can identify some of these limitations by working with the data, there is also a crucial amount of information known by the curators of the data that might not be obvious to end-users. Datasheets offer one possible way for libraries to begin communicating this information more systematically. 

The dataset hub adopts the practice of writing datasheets and encourages users of the hub to write one for their dataset. For the British Library books dataset, we have attempted to write one of these datasheets. Whilst it is certainly not perfect, it hopefully begins to outline some of the challenges of this dataset and gives end-users a better sense of how they should approach it. 

14 February 2022

PhD Placement on Mapping Caribbean Diasporic Networks through Correspondence

Every year the British Library hosts a range of PhD placement scheme projects. If you are interested in applying for one of these, the 2022 opportunities are advertised here. There are currently 15 projects available across Library departments, all starting from June 2022 onwards and ending before March 2023. If you would like to work with born digital collections, you may want to read last week’s Digital Scholarship blog post about two projects on enhanced curation, hybrid archives and emerging formats. However, if you are interested in Caribbean diasporic networks and want to experiment with creating network analysis visualisations, then read on to find out more about the “Mapping Caribbean Diasporic Networks through correspondence (2022-ACQ-CDN)” project.

This is an exciting opportunity to be involved with the preliminary stages of a project to map the Caribbean Diasporic Network evident in the ‘Special Correspondence’ files of the Andrew Salkey Archive. This placement will be based in the Contemporary Literary and Creative Archives team at the British Library with support from Digital Scholarship colleagues. The successful candidate will be given access to a selection of correspondence files to create an item level dataset and explore the content of letters from the likes of Edward Kamau Brathwaite, C.L.R. James, and Samuel Selvon.

Photograph of Andrew Salkey
Photograph of Andrew Salkey, from the Andrew Salkey Archive, Deposit 10310. With kind permission of Jason Salkey.

The main outcome envisaged for this placement is to develop a dataset, using a sample of ten files, linking the data and mapping the correspondents’ names, the locations they were writing from, and the dates of the correspondence in a spreadsheet. The placement student will also learn how to use the Gephi Open Graph Visualisation Platform to create a visual representation of this network, associating individuals with each other and mapping their movement across the world between the 1950s and 1990s.

Gephi is open-source software for visualising and analysing networks. Its documentation provides a step-by-step guide to getting started, with the first step being to upload a spreadsheet detailing your ‘nodes’ and ‘edges’. To show how Gephi can be used, we’ve included an example below, created by previous British Library research placement student Sarah FitzGerald from the University of Sussex, who used data from the Endangered Archives Programme (EAP) to visualise all EAP applications received between 2004 and 2017.
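To give a sense of the data preparation involved, here is a minimal sketch of building the two spreadsheets (nodes and edges) that Gephi’s spreadsheet import expects. The correspondence records are entirely hypothetical placeholders, not data from the Salkey archive:

```python
import csv

# Hypothetical correspondence records: (writer, place written from, year).
letters = [
    ("Writer A", "London", 1968),
    ("Writer A", "Kingston", 1971),
    ("Writer B", "London", 1969),
]

# Nodes: one row per person or place, each needing a unique Id and a Label.
people = sorted({w for w, _, _ in letters})
places = sorted({p for _, p, _ in letters})
with open("nodes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Id", "Label"])
    for name in people + places:
        writer.writerow([name, name])

# Edges: Gephi requires Source and Target columns; extra columns
# (here the year) are imported as edge attributes.
with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Year"])
    for person, place, year in letters:
        writer.writerow([person, place, year])
```

Once imported, Gephi treats each row of edges.csv as a link between two nodes, so repeated writer–place pairs can be weighted or filtered by their attributes.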

Gephi network visualisation diagram
Network visualisation of EAP Applications created by Sarah FitzGerald

In this visualisation the size of each country relates to the number of applications it features in, as country of archive, country of applicant, or both.  The colours show related groups. Each line shows the direction and frequency of application. The line always travels in a clockwise direction from country of applicant to country of archive, the thicker the line the more applications. Where the country of applicant and country of archive are the same the line becomes a loop. If you want to read more about the other visualisations that Sarah created during her project, please check out these two blog posts:

We hope this new PhD placement will offer the successful candidate the opportunity to develop their specialist knowledge through access to the extensive correspondence series in the Andrew Salkey archive, and to undertake practical research in a curatorial context by improving the accessibility of linked metadata for this collection material. This project is a vital building block in improving the Library’s engagement with this material and exploring the ways it can be accessed by a wider audience.

If you want to apply, details are available on the British Library website at https://www.bl.uk/research-collaboration/doctoral-research/british-library-phd-placement-scheme. Applications for all 2022/23 PhD Placements close on Friday 25 February 2022, 5pm GMT. The application form and guidelines are available online here. Please address any queries to [email protected]

This post is by Digital Curator Stella Wisdom (@miss_wisdom) and Eleanor Casson (@EleCasson), Curator in Contemporary Archives and Manuscripts.

07 February 2022

New PhD Placements on Enhanced Curation: Hybrid Archives and Emerging Formats

The British Library is accepting applications for the new round of 2022 PhD Placement opportunities: there are 15 projects available across Library departments, all starting from June 2022 onwards and ending before March 2023. Two of the projects within the Contemporary British Collections department focus on Enhanced Curation as an approach to add to the research value of an archival object or digital publication.

“Developing an enhanced curation framework for contemporary hybrid archives (2022-CB-HAC)” will outline a framework for Enhanced Curation in relation to contemporary hybrid archives. These archival collections are the record of the creative and professional lives of prominent individuals in UK society, containing both paper and digital material. So far we have defined Enhanced Curation as the means by which the research value of these records can be enhanced through the creation, collection, and interrogation of the contextual information which surrounds them.

Luckily, we’re in a privileged position – most of our archive donors are living individuals who can illuminate their creative practice for us in real-time. Similarly, with forensic techniques, we’re capturing more data than ever before when we acquire an archive. The truly live questions are then – how can we use this position to best effect? What can we do with what we’re already collecting? What else should we be collecting? And how can we represent this data in engaging and enlightening new ways for the benefit of everyone, including our researchers and exhibition audiences?

Enhanced Curation, as we see it, is about bringing these dynamic collections to life for as many people as possible.  In approaching these questions, the chosen student will engage in a mixture of theoretical and practical work – first outlining the relevant debates and techniques in and around curation, archival science, museology and digital humanities, and then recommending a course of action for one particular hybrid personal archive. This is a collaborative exercise, though, and they will be provided with hands-on training for working with (and getting the most out of) this growing collection area by specialist curatorial staff at the Library.

Photograph of a floppy disk and its case
Floppy disk from the Will Self archive.

“Collecting complex digital publications: Testing an enhanced curation method (2022-CB-EF)” focuses on the Library’s collection of emerging formats. Emerging formats are defined as born-digital publications whose structure, technical dependencies and highly interactive nature challenge our traditional collection methods. These publications include apps, such as the interactive adventure 80 Days, as well as digital interactive narratives, such as the examples collected in the UK Web Archive Interactive Narratives and New Media Writing Prize collections. Collection and preservation of these digital formats in their entirety might not always be possible: there are many challenges and implications in terms of technical capabilities, software and hardware dependencies, copyright restrictions and long-term solutions that are effective against technical obsolescence.

The collection and creation of contextual information is one approach to filling in the gaps and enhancing curation for these digital publications. The placement student will help us test a collection matrix for contextual information relating to emerging formats, which includes – but is not limited to – webpages, interviews, reviews, blog posts and screenshots/screencasts of usage of a work. These might be collected using a variety of methods (e.g. web archiving, direct transfer from the author, etc.) as well as created by the student themselves (e.g. interviews with the author, video recordings of usage, etc.). Through this placement, the student will have the opportunity to participate in a network of cultural heritage institutions concerned with the preservation of digital publications, while helping develop one of the Library’s contemporary collections.

Photograph of a man looking at an iPad screen and reading an app
Interacting with the American Interior app on iPad.

Both PhD Placements are offered for 3 months full time, or part-time equivalent. They can be undertaken as hybrid placements (i.e. remotely, with some visits to the British Library building in London, St. Pancras), with the option of a fully remote placement for “Collecting complex digital publications: Testing an enhanced curation method”.

Applications for all 2022/23 PhD Placements close on Friday 25 February 2022, 5pm GMT. The application form and guidelines are available online here. Please address any queries to [email protected]

This post is by Giulia Carla Rossi, Curator of Digital Publications on twitter as @giugimonogatari and Callum McKean, Digital Lead Curator, Contemporary Archives and Manuscripts.

26 January 2022

Which Came First: The Author or the Text? Wikidata and the New Media Writing Prize

Congratulations to the 2021 New Media Writing Prize (NMWP) winners, who were announced at a Bournemouth University online event recently: Joannes Truyens and collaborators (Main Prize), Melody MOU (Student Award) and Daria Donina (FIPP Journalism Award 2021). The main prize winner ‘Neurocracy’ is an experimental dystopian narrative that takes place over 10 episodes, through Omnipedia, an imagined future version of Wikipedia in 2049. So this seemed like a very apt jumping off point for today’s blog post, which discusses a recent project where we added NMWP data to Wikidata.

Screen image of Omnipedia, an imagined futuristic version of Wikipedia from Neurocracy by Joannes Truyens
Omnipedia, an imagined futuristic version of Wikipedia from Neurocracy by Joannes Truyens

Note: If you wish to read ‘Neurocracy’ and are prompted for a username and password, use NewMediaWritingPrize1 password N3wMediaWritingPrize!. You can learn more about the work in this article and listen to an interview with the author in this podcast episode.

Working With Wikidata

Dr Martin Poulter describes learning how to work with Wikidata as being like learning a language. When I first heard this description, I didn’t understand: how could something so reliant on raw data be anything like the intricacies of language learning?

It turns out, Martin was completely correct.

Imagine a stack of data as slips of paper. Each slip has an individual piece of data on it: an author’s name, a publication date, a format, a title. How do you start to string this data together so that it makes sense?

One of the beautiful things about Wikidata is that it is both machine and human readable. In order for it to work this way, and for us to upload it effectively, thinking about the relationships between these slips of paper is essential.

In 2021, I had an opportunity to see what Martin was talking about when he spoke about language, as I was asked to work with a set of data about NMWP shortlisted and winning works, which the British Library has collected in the UK Web Archive. You can read more about this special collection here and here.

Image of blank post-it notes and a hand with a marker pen preparing to write on one.

About the New Media Writing Prize

The New Media Writing Prize was founded in 2010 to showcase exciting and inventive stories and poetry that integrate a variety of digital formats, platforms, and media. One of the driving forces in setting up and establishing the prize was Chris Meade, director of if:book uk, a ‘think and do tank’ for exploring digital and collaborative possibilities for writers and readers. He was the lead sponsor of the if:book UK New Media Writing Prize and the Dot Award, which he created in honour of his mother, Dorothy, and he chaired every NMWP awards evening from 2010 onwards. Very sadly Chris passed away on 13th January 2022, and the recent 2021 awards event was dedicated to Chris and his family.

Recognising the significance of the NMWP, in recent years the British Library created the New Media Writing Prize Special Collection as part of its emerging formats work. With 11 years of metadata about a born digital collection, this was an ideal data set for me to work with in order to establish a methodology for working with Wikidata uploads in the Library.

Last year I was fortunate to collaborate with Tegan Pyke, a PhD placement student in the Contemporary British Publications Collections team, supervised by Giulia Carla Rossi, Curator for Digital Publications. Tegan's project examined the digital preservation challenges of complex digital objects, developing and testing a quality assurance process for examining works in the NMWP collection. If you want to read more about this project, a report is available here. For the Wikidata work Tegan and Giulia provided two spreadsheets of data (or slips of paper!), and my aim was to upload linked data that covered the authors, their works, and the award itself - who had been shortlisted, who had won, and when.

Simple, right?

Getting Started

I thought so - until I began to structure my uploads. There were some key questions that needed to be answered about how these relationships would be built, and I needed to start somewhere. Should I upload the authors or the texts first? Should I go through the prize year by year, or be led by other information? And what about texts with multiple authors?

Suddenly it all felt a bit more intimidating!

I was fortunate to attend some Wikidata training run by Wikimedia UK late last year. Martin was our trainer, and one piece of advice he gave us was indispensable: if you’re not sure where to start, literally write it out with pencil and paper. What is the relationship you’re trying to show, in its simplest form? This is where language framing comes in especially useful: thinking about the basic sentence structures I’d learned in high school German became vital.

Image shows four simple sentences: Christine Wilks won NMWP in 2010. Christine Wilks wrote Underbelly. Underbelly won NMWP in 2010. NMWP was won by Christine Wilks in 2010. Christine Wilks is circled in green, NMWP in purple, and Underbelly in yellow. QIDs are listed: Q108810306 (highlighted in green), Q108459688 (highlighted in purple), Q109237591 (highlighted in yellow). Properties are listed: P166 (highlighted in blue), P800 (highlighted in turquoise), P585 (highlighted in orange).
Image by the author, notes own.

The Numbers Bit

You can see from this image how the framework develops: specific items, like nouns, are given identification numbers when they become a Wikidata item. This is their QID. The relationships between QIDs, sort of like the adjectives and verbs, are defined as properties and have P numbers. So Christine Wilks is now Q108810306, and her relationship to her work, Underbelly, or Q109237591, is defined with P800 which means ‘notable work’.

Q108810306 - P800 - Q109237591

You can upload this relationship using the visual editor on Wikidata, by clicking fields and entering data. If you have a large amount of information (remember those slips of paper!) tools like QuickStatements become very useful. Dominic Kane blogged about his experience of this system during his British Library student placement project in 2021.
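As a sketch of what that looks like in practice, the single item–property–item relationship above could be expressed as one tab-separated line in a QuickStatements batch (this illustrates the general v1 syntax only; check the QuickStatements help pages for the exact format before running a batch):

```
Q108810306	P800	Q109237591
```

Run as a batch, this would add the ‘notable work’ (P800) statement linking Christine Wilks’ item to Underbelly’s, exactly as entered through the visual editor.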

The intricacies of language are also very important on Wikidata. The nuance and inference we can draw from specific terms is important. The concept of ‘winning’ an award became a subject of semantic debate: the taxonomy of Wikidata advises that we use ‘award received’ in the case of a literary prize, as it’s less of an active sporting competition than something like a marathon or an athletic event.

Asking Questions of the Data

Ultimately we upload information to Wikidata so that it can be queried. Querying uses SPARQL, a language which allows users to draw information and patterns from vast swathes of data. Querying can be complex: to go back to the language analogy, you have to phrase the query in precisely the right way to get the information you want.

One of the lessons I learned during the NMWP uploads was the importance of a unifying property. Users will likely query this data with a view to surveying results and finding patterns. Each author and work, therefore, needed to be linked to the prize and the collection itself (pictured above). By adding this QID to the property P6379 (‘has works in the collection’), we create a web of data that links every shortlisted author over the 11 year time period.
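As a rough sketch of the kind of query this unifying property enables (the collection QID below is a placeholder, not the real NMWP collection item):

```sparql
# Authors with works in a given collection, with their English labels.
# wd:Q00000000 is a placeholder; substitute the collection's actual QID.
SELECT ?author ?authorLabel WHERE {
  ?author wdt:P6379 wd:Q00000000 .   # has works in the collection
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
```

Because every shortlisted author carries the same P6379 statement, one short query like this surfaces the whole network at once.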

Example Queries

To have a look at some of the NMWP data, here are some queries I prepared earlier. Please note that data from the 2021 competition has not yet been uploaded!

Authors who won NMWP

Works that won NMWP

Authors nominated for NMWP

Works nominated for NMWP

If you fancy trying some queries but don’t know where to start, I recommend these tutorials:

Tutorials

Resources About SPARQL

This post is by Wikimedian in Residence Dr Lucy Hinnie (@BL_Wikimedian).
