Digital scholarship blog

Enabling innovative research with British Library digital collections

Introduction

Tracking exciting developments at the intersection of libraries, scholarship and technology. Read more

20 April 2022

Importing images into Zooniverse with a IIIF manifest: introducing an experimental feature

Digital Curator Dr Mia Ridge shares news from a collaboration between the British Library and Zooniverse that means you can more easily create crowdsourcing projects with cultural heritage collections. There's a related blog post on Zooniverse, Fun with IIIF.

IIIF manifests - text files that tell software how to display images, sound or video files alongside metadata and other information about them - might not sound exciting, but by linking to them, you can view and annotate collections from around the world. The IIIF (International Image Interoperability Framework) standard makes images (or audio, video or 3D files) more re-usable - they can be displayed on another site alongside the original metadata and information provided by the source institution. If an institution updates a manifest - perhaps adding information from updated cataloguing or crowdsourcing - any sites that display that image automatically get the updated metadata.

Playbill showing the title after other large text

We've posted before about how we used IIIF manifests as the basis for our In the Spotlight crowdsourced tasks on LibCrowds.com. Playbills are great candidates for crowdsourcing because they are hard to transcribe automatically, and the layout and information present varies a lot. Using IIIF meant that we could access images of playbills directly from the British Library servers without needing server space and extra processing to make local copies. You didn't need technical knowledge to copy a manifest address and add a new volume of playbills to In the Spotlight. This worked well for a couple of years, but over time we'd found it difficult to maintain bespoke software for LibCrowds.

When we started looking for alternatives, the Zooniverse platform was an obvious option. Zooniverse hosts dozens of historical or cultural heritage projects, and hundreds of citizen science projects. It has millions of volunteers, and a 'project builder' that means anyone can create a crowdsourcing project - for free! We'd already started using Zooniverse for other Library crowdsourcing projects such as Living with Machines, which showed us how powerful the platform can be for reaching potential volunteers. 

But that experience also showed us how complicated the process of getting images and metadata onto Zooniverse could be. Using Zooniverse for volumes of playbills for In the Spotlight would require some specialist knowledge. We'd need to download images from our servers, resize them, generate a 'manifest' list of images and metadata, then upload it all to Zooniverse; and repeat that for each of the dozens of volumes of digitised playbills.

Fast forward to summer 2021, when we had the opportunity to put a small amount of funding into some development work by Zooniverse. I'd already collaborated with Sam Blickhan at Zooniverse on the Collective Wisdom project, so it was easy to drop her a line and ask if they had any plans or interest in supporting IIIF. It turned out they had, but hadn't previously had the resources or an interested organisation to take the work forward.

We came up with a brief outline of what the work needed to do, taking the ability to recreate some of the functionality of In the Spotlight on Zooniverse as a goal. Therefore, 'the ability to add subject sets via IIIF manifest links' was key. ('Subject set' is Zooniverse-speak for 'set of images or other media' that are the basis of crowdsourcing tasks.) And of course we wanted the ability to set up some crowdsourcing tasks with those items… The Zooniverse developer, Jim O'Donnell, shared his work in progress on GitHub, and I was very easily able to set up a test project and ask people to help create sample data for further testing. 

If you have a Zooniverse project and a IIIF address to hand, you can try out the import for yourself: add 'subject-sets/iiif?env=production' to your project builder URL. For example, if your project is number xxx, then the URL to access the IIIF manifest import would be https://www.zooniverse.org/lab/xxx/subject-sets/iiif?env=production

Paste a manifest URL into the box. The platform parses the file to present a list of metadata fields, which you can flag as hidden or visible in the subject viewer (the public task interface). When you're happy, you can click a button to upload the manifest as a new subject set (like a folder of items), and your images are imported. (Don't worry if it says '0 subjects'.)
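Under the hood, the importer is reading the manifest's JSON. As a rough illustration of what that involves - a minimal sketch in Python that assumes a IIIF Presentation API 2.x manifest and uses a placeholder URL, not the actual Zooniverse code - you could list the metadata fields and count the images like this:

import requests

# Placeholder address - substitute the URL of a real IIIF manifest
MANIFEST_URL = "https://example.org/iiif/manifest.json"
manifest = requests.get(MANIFEST_URL).json()

# The top-level label, plus the metadata fields you could flag as hidden or visible
print(manifest.get("label"))
for field in manifest.get("metadata", []):
    print(field.get("label"), ":", field.get("value"))

# Each canvas corresponds to one image that would become a Zooniverse subject
canvases = manifest.get("sequences", [{}])[0].get("canvases", [])
print(len(canvases), "images in this manifest")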

 

Screenshot of manifest import screen

You can try out our live task and help create real data for testing ingest processes at ​​https://frontend.preview.zooniverse.org/projects/bldigital/in-the-spotlight/classify

This is a very brief introduction, with more to come on managing data exports and IIIF annotations once you've set up, tested and launched a crowdsourced workflow (task). We'd love to hear from you - how might this be useful? What issues do you foresee? How might you want to expand or build on this functionality? Email [email protected] or tweet @mia_out @LibCrowds. You can also comment on GitHub https://github.com/zooniverse/Panoptes-Front-End/pull/6095 or https://github.com/zooniverse/iiif-annotations

Digital work in libraries is always collaborative, so I'd like to thank British Library colleagues in Finance, Procurement, Technology, Collection Metadata Services and various Collections departments; the Zooniverse volunteers who helped test our first task and of course the Zooniverse team, especially Sam, Jim and Chris for their work on this.

 

12 April 2022

Making British Library collections (even) more accessible

Daniel van Strien, Digital Curator, Living with Machines, writes:

The British Library’s digital scholarship department has made many digitised materials available to researchers. This includes a collection of books digitised by the British Library in partnership with Microsoft and processed using Optical Character Recognition (OCR) software to make the text machine-readable. There is also a collection of books digitised in partnership with Google.

Since being digitised, this collection has been used for many different projects. This includes recent work to try and augment the dataset with genre metadata and a project using machine learning to tag images extracted from the books. The books have also served as training data for a historic language model.

This blog post will focus on two challenges of working with this dataset: size and documentation, and discuss how we’ve experimented with one potential approach to addressing these challenges. 

One of the challenges of working with this collection is its size. The OCR output is over 20GB. This poses some challenges for researchers and other interested users wanting to work with these collections. Projects like Living with Machines are one avenue in which the British Library seeks to develop new methods for working at scale. For an individual researcher, one of the possible barriers to working with a collection like this is the computational resources required to process it. 

Recently we have been experimenting with a Python library, datasets, to see if this can help make this collection easier to work with. The datasets library is part of the Hugging Face ecosystem. If you have been following developments in machine learning, you have probably heard of Hugging Face already. If not, Hugging Face is a delightfully named company focusing on developing open-source tools aimed at democratising machine learning. 

The datasets library is a tool that aims to make it easier for researchers to share and efficiently process large datasets for machine learning. Whilst this was the library’s original focus, there are also other use cases for which it may help make datasets held by the British Library more accessible.

Some features of the datasets library:

  • Tools for efficiently processing large datasets 
  • Support for easily sharing datasets via a ‘dataset hub’ 
  • Support for documenting datasets hosted on the hub (more on this later). 

As a result of these and other features, we have recently worked on adding the British Library books dataset to the Hugging Face hub. Making the dataset available via the datasets library has made it more accessible in a few different ways.

Firstly, it is now possible to download the dataset in two lines of Python code: 

Image of a line of code: "from datasets import load_dataset ds = load_dataset('blbooks', '1700_1799')"

We can also use the Hugging Face datasets library to process large datasets. For example, suppose we only want to include data with a high OCR confidence score (this partially helps filter out text with many OCR errors):

Image of a line of code: "ds.filter(lambda example: example['mean_wc_ocr'] > 0.9)"

One of the particularly nice features here is that the library uses memory mapping to store the dataset under the hood. This means that you can process data that is larger than the RAM you have available on your machine. This can make the process of working with large datasets more accessible. We could also use this as a first step in processing data before getting back to more familiar tools like pandas. 

Image of a line of code: "dogs_data = ds['train'].filter(lamda example: "dog" in example['text'].lower()) df = dogs_data_to_pandas()

In a follow on blog post, we’ll dig into the technical details of datasets in some more detail. Whilst making the technical processing of datasets more accessible is one part of the puzzle, there are also non-technical challenges to making a dataset more usable. 

 

Documenting datasets 

One of the challenges of sharing large datasets is documenting the data effectively. Traditionally libraries have mainly focused on describing material at the ‘item level’, i.e. documenting one item at a time. However, there is a difference between documenting one book and 100,000 books. There are no easy answers to this, but one possible avenue libraries could explore is the use of Datasheets. Timnit Gebru et al. proposed the idea in ‘Datasheets for Datasets’. A datasheet aims to provide a structured format for describing a dataset, covering questions such as how and why it was constructed, what the data consists of, and how it could potentially be used. Crucially, datasheets also encourage a discussion of the bias and limitations of a dataset. Whilst you can identify some of these limitations by working with the data, there is also a crucial amount of information known by the curators of the data that might not be obvious to end-users. Datasheets offer one possible way for libraries to begin more systematically communicating this information.

The dataset hub adopts the practice of writing datasheets and encourages users of the hub to write one for their dataset. For the British Library books dataset, we have attempted to write one of these datasheets. Whilst it is certainly not perfect, it hopefully begins to outline some of the challenges of this dataset and gives end-users a better sense of how they should approach it.

18 March 2022

Looking back at LibCrowds: surveying our participants

'In the Spotlight' is a crowdsourcing project from the British Library that aims to make digitised historical playbills more discoverable, while also encouraging people to closely engage with this otherwise less accessible collection. Digital Curator Dr Mia Ridge writes...

If you follow our @LibCrowds account on twitter, you might have noticed that we've been working on refreshed versions of our In the Spotlight tasks on Zooniverse. That's part of a small project to enable the use of IIIF manifests on Zooniverse - in everyday language, it means that many, many more digitised items can form the basis of crowdsourcing tasks in the Zooniverse Project Builder, and In the Spotlight is the first project to use this new feature. Along with colleagues in Printed Heritage and BL Labs, I've been looking at our original Pybossa-based LibCrowds site to plan a 'graceful ending' for the first phase of the project on LibCrowds.com.

As part of our work documenting and archiving the original LibCrowds site, I'm delighted to share summary results from a 2018 survey of In the Spotlight participants, now published on the British Library's Research Repository: https://doi.org/10.23636/w4ee-yc34. Our thanks go to Susan Knight, Customer Insight Coordinator, for her help with the survey.

The survey was designed to help us understand who In the Spotlight participants were, and to help us prioritise work on the project. The 22-question survey was based on earlier surveys run by the Galaxy Zoo and Art UK Tagger projects, to allow comparison with other crowdsourcing projects, and to contribute to our understanding of crowdsourcing in cultural heritage more broadly. It was open to anyone who had contributed to the British Library's In the Spotlight project for historical playbills. The survey was distributed to LibCrowds newsletter subscribers, on the LibCrowds community forum and on social media.

Some headline findings from our survey include:

  • The typical respondent was a woman with a Master's degree, in full-time employment, based in London or Southeast UK, who contributes in a break between other tasks or 'whenever they have spare time'.
  • 76% of respondents were motivated by contributing to historical or performance research

Responses to the question 'What was it about this project which caused you to spend more time than intended on it?':

  • Easy to do
  • It's so entertaining
  • Every time an entry is completed you are presented with another item which is interesting and illuminating which provides a continuous temptation regarding what you might discover next
  • simplicity
  • A bit of competitiveness about the top ten contributors but also about contributing something useful
  • I just got carried away with the fun
  • It's so easy to complete
  • Easy to want to do just a few more
  • Addiction
  • Felt I could get through more tasks
  • Just getting engrossed
  • It can be a bit addictive!
  • It's so easy to do that it's very easy to get carried away.
  • interested in the [material]

The summary report contains more rich detail, so go check it out!

 

Detail of the front page of libcrowds.com; Crowdsourcing projects from the British Library. 2,969 Volunteers. 265,648 Contributions. 175 Projects

16 March 2022

Getting Ready for Black Theatre and the Archive: Making Women Visible, 1900-1950

Following on from last week’s post, have you signed up for our Wikithon already? If you are interested in Black theatre history and making women visible, and want to learn how to edit Wikipedia, please do join us online, on Monday 28th March, from 10am to 1.30pm BST, over Zoom.

Remember the first step is to book your place here, via Eventbrite.

Finding Sources in The British Newspaper Archive

We are grateful to the British Newspaper Archive and Findmypast for granting our participants access to their resources on the day of the event. If you’d like to learn more about the British Newspaper Archive beforehand, there are some handy guides below.

Front page of the British Newspaper Archive website, showing the search bar and advertising Findmypast.
The British Newspaper Archive Homepage

I used a quick British Newspaper Archive search to look for information on Una Marson, a playwright and artist whose work is very important in the timeframe of this Wikithon (1900-1950). As you can see, there were over 1,000 results. I was able to view images of Una at gallery openings and art exhibitions, and read all about her work.

Page of search results on the British Newspaper Archive, looking for articles about Una Marson.
A page of results for Una Marson on the British Newspaper Archive

Findmypast focuses more on legal records of people, living and dead. It’s a dream website for genealogists and those interested in social history. They’ve recently uploaded the results of the 1921 census, so there is a lot of material about people’s lives in the early 20th century.

Image of the landing page for the 1921 Census of England and Wales on Findmypast.
The Findmypast 1921 Census Homepage.

 

Here’s how to get started with Findmypast in 15 minutes, using a series of ‘how to’ videos. This handy blog post offers a beginner's guide on how to search Findmypast's family records, and you can always use Findmypast’s help centre to seek answers to frequently asked questions.

Wikipedia Preparation

If you’d like to get a head start, you can download and read our handy guide to setting up your Wikipedia account, which you can access here. There is also advice available on creating your account, Wikipedia's username policy and how to create your user page.

The Wikipedia logo, a white globe made of jigsaw pieces with letters and symbols on them in black.
The Wikipedia Logo, Nohat (concept by Paullusmagnus), CC BY-SA 3.0, via Wikimedia Commons

Once you have done that, or if you already have a Wikipedia account, please join our event dashboard and go through the introductory exercises, which cover:

  • Wikipedia Essentials
  • Editing Basics
  • Evaluating Articles and Sources
  • Contributing Images and Media Files
  • Sandboxes and Mainspace
  • Sources and Citations
  • Plagiarism

These are all short exercises that will help familiarise you with Wikipedia and its processes. Don’t have time to do them? We get it, and that’s totally fine - we’ll cover the basics on the day too!

You may want to verify your Wikipedia account - this function exists to make sure that people are contributing responsibly to Wikipedia. The easiest and swiftest way to verify your account is to do 10 small edits. You could do this by correcting typos or adding in missing dates. However, another way to do this is to find articles where citations are needed, and add them via Citation Hunt. For further information on adding citations, watching this video may be useful.

Happier with an asynchronous approach?

If you cannot join the Zoom event on Monday 28th March, but would like to contribute, please do check out and sign up to our dashboard. The online dashboard training exercises will be an excellent starting point. From there, all of your edits and contributions will be registered, and you can be proud of yourself for making the world of Wikipedia a better place, in your own time.

This post is by Wikimedian in Residence Dr Lucy Hinnie (@BL_Wikimedian).

14 March 2022

The Lotus Sutra Manuscripts Digitisation Project: the collaborative work between the Heritage Made Digital team and the International Dunhuang Project team

Digitisation has become one of the key curatorial tasks within the British Library. It rests on two main pillars: making collection items accessible to everybody around the world, and preserving unique and sometimes very fragile items. Digitisation involves many different teams and workflow stages including retrieval, conservation, curatorial management, copyright assessment, imaging, workflow management, quality control, and the final publication to online platforms.

The Heritage Made Digital (HMD) team works across the Library to assist with digitisation projects. An excellent example of the collaborative nature of the relationship between the HMD and International Dunhuang Project (IDP) teams is the quality control (QC) of the Lotus Sutra Project’s digital files. It is crucial that images meet the quality standards of the digital process. As a Digitisation Officer in HMD, I am in charge of QC for the Lotus Sutra Manuscripts Digitisation Project, which is currently conserving and digitising nearly 800 Chinese Lotus Sutra manuscripts to make them freely available on the IDP website. The manuscripts were acquired by Sir Aurel Stein after they were discovered in a hidden cave in Dunhuang, China in 1900. They are thought to have been sealed there at the beginning of the 11th century. They are now part of the Stein Collection at the British Library and, together with the international partners of the IDP, we are working to make them available digitally.

The majority of the Lotus Sutra manuscripts are scrolls and, after they have been treated by our dedicated Digitisation Conservators, our expert Senior Imaging Technician Isabelle does an outstanding job of imaging the fragile manuscripts. My job is then to prepare the images for publication online. This includes checking that they have the correct technical metadata such as image resolution and colour profile, are an accurate visual representation of the physical object and that the text can be clearly read and interpreted by researchers. After nearly 1000 years in a cave, it would be a shame to make the manuscripts accessible to the public for the first time only to be obscured by a blurry image or a wayward piece of fluff!
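Parts of that technical check lend themselves to simple scripting. Purely as an illustration - this is not the HMD team's actual tooling, the 300 dpi threshold is an assumption and the filename is hypothetical - a resolution and colour-profile check using the Pillow library might look like this:

from PIL import Image

def check_image(path, min_dpi=300):
    # Open the file and read the technical metadata Pillow exposes
    img = Image.open(path)
    dpi = img.info.get("dpi", (0, 0))      # resolution, if recorded in the file
    has_icc = "icc_profile" in img.info    # is an embedded colour profile present?
    print(f"{path}: {img.size[0]}x{img.size[1]} px, {dpi[0]} dpi, "
          f"colour profile {'present' if has_icc else 'missing'}")
    return dpi[0] >= min_dpi and has_icc

# Example call with a hypothetical filename:
# check_image("Or8210_S1530_panel_01.tif")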

With the scrolls measuring up to 13 metres long, most are too long to be imaged in one go. They are instead shot in individual panels, which our Senior Imaging Technicians digitally “stitch” together to form one big image. This gives online viewers a sense of the physical scroll as a whole, in a way that would not be possible in real life for those scrolls that are more than two panels in length unless you have a really big table and a lot of specially trained people to help you roll it out. 

Photo showing the three individual panels of Or.8210/S.1530 with breaks in between
Or.8210/S.1530: individual panels
Photo showing the three panels of Or.8210/S.1530 as one continuous image
Or.8210/S.1530: stitched image

 

This post-processing can create issues, however. Sometimes an error in the stitching process can cause a scroll to appear warped or wonky. In the stitched image for Or.8210/S.6711, the ruled lines across the top of the scroll appeared wavy and misaligned. But when I compared this with the images of the individual panels, I could see that the lines on the scroll itself were straight and unbroken. It is important that the digital images faithfully represent the physical object as far as possible; we don’t want anyone thinking these flaws are in the physical item and writing a research paper about ‘Wonky lines on Buddhist Lotus Sutra scrolls in the British Library’. Therefore, I asked the Senior Imaging Technician to restitch the images together: no more wonky lines. However, we accept that the stitched images cannot be completely accurate digital surrogates, as they are created by the Imaging Technician to represent the item as it would be seen if it were to be unrolled fully.

 

Or.8210/S.6711: distortion from stitching. The ruled line across the top of the scroll is bowed and misaligned

 

Similarly, our Senior Imaging Technician applies ‘digital black’ to make the image background a uniform colour. This is to hide any dust or uneven background and ensure the object is clear. If this is accidentally overused, it can make it appear that a chunk has been cut out of the scroll. Luckily this is easy to spot and correct, since we retain the unedited TIFFs and RAW files to work from.

 

Or.8210/S.3661, panel 8: overuse of digital black when filling in tear in scroll. It appears to have a large black line down the centre of the image.

 

Sometimes the scrolls are wonky, or dirty or incomplete. They are hundreds of years old, and this is where it can become tricky to work out whether there is an issue with the images or the scroll itself. The stains, tears and dirt shown in the images below are part of the scrolls and their material history. They give clues to how the manuscripts were made, stored, and used. This is all of interest to researchers and we want to make sure to preserve and display these features in the digital versions. The best part of my job is finding interesting things like this. The fourth image below shows a fossilised insect covering the text of the scroll!

 

Black stains: Or.8210/S.2814, panel 9
Torn and fragmentary panel: Or.8210/S.1669, panel 1
Insect droppings obscuring the text: Or.8210/S.2043, panel 1
Fossilised insect covering text: Or.8210/S.6457, panel 5

 

We want to minimise the handling of the scrolls as much as possible, so we will only reshoot an image if it is absolutely necessary. For example, I would ask a Senior Imaging Technician to reshoot an image if debris is covering the text and makes it unreadable - but only after inspecting the scroll to ensure it can be safely removed and is not stuck to the surface. However, if some debris such as a small piece of fluff, paper or hair, appears on the scroll’s surface but is not obscuring any text, then I would not ask for a reshoot. If it does not affect the readability of the text, or any potential future OCR (Optical Character Recognition) or handwriting analysis, it is not worth the risk of damage that could be caused by extra handling. 

Reshoot: Or.8210/S.6501: debris over text  /  No reshoot: Or.8210/S.4599: debris not covering text.

 

These are a few examples of the things to which the HMD Digitisation Officers pay close attention during QC. Only through this careful process can we ensure that the digital images accurately reflect the physicality of the scrolls and represent their original features. By developing a QC process that applies the best techniques and procedures, working to defined standards and guidelines, we succeed in making these incredible items accessible to the world.

Read more about the Lotus Sutra Project on the IDP Blog

IDP website: IDP.BL.UK

And IDP twitter: @IDP_UK

Dr Francisco Perez-Garcia

Digitisation Officer, Heritage Made Digital: Asian and African Collections

Follow us @BL_MadeDigital

10 March 2022

Scoping the connections between trusted arts and humanities data repositories

CONNECTED: Connecting trusted Arts and Humanities data repositories is a newly funded activity, supported by AHRC. It is led by the British Library, with the Archaeology Data Service and the Oxford Text Archive as co-investigators, and is supported by consultants from MoreBrains Cooperative. The CONNECTED team believes that improving discovery and curation of heritage and emergent content types in the arts and humanities will increase the impact of cultural resources, and enhance equity. Great work is already being done on discovery services for the sector, so we decided to look upstream, and focus on facilitating repository and archive deposit.

The UK boasts a dynamic institutional repository environment in the HE sector, as well as a range of subject- or field-specific repositories. With a distributed repository landscape now firmly established, challenges and inefficiencies still remain that reduce its impact. These include issues around discovery and access, but also questions around interoperability, the relationship of specialised vs general infrastructures, and potential duplication of effort from an author/depositor perspective. Greater coherence and interoperability will effectively unite different trusted repository services to form a resilient distributed data service, which can grow over time as new individual services are required and developed. Alongside the other projects funded as part of ‘Scoping future data services for the arts and humanities’, CONNECTED will help to deliver this unified network.

As practice in the creative arts becomes more digital and the digital humanities continue to thrive, the diversity of ways in which this research is expressed continues to grow. Researchers are increasingly able to combine artefacts, documents, and materials in new and innovative ways; practice-based research in the arts is creating a diverse range of (often complex) outputs, creating new curation and discovery needs; and heritage collections often contain artefacts with large amounts of annotation and commentary amassed over years or centuries, across multiple formats, and with rich contextual information. This expansion is already exposing the limitations of our current information systems, with the potential for vital context and provenance to become invisible. Without additional, careful future-proofing, the risks of information loss and limits on access will only expand. Metadata creation, deposit, preservation, and discovery strategies should therefore be tailored to meet the very different needs of the arts and humanities.

A number of initiatives are aimed at improving interoperability between metadata sources in ways that are more oriented towards the needs of the arts and humanities. Drawing these together with the insights to be gained from the abilities (and limitations) of bibliographic and data-centric metadata and discovery systems will help to generate robust services in the complex, evolving landscape of arts and humanities research and creation.

The CONNECTED project will assemble experts, practitioners, and researchers to map current gaps in the content curation and discovery ecosystem and weave together the strengths and potentials of a range of platforms, standards, and technologies in the service of the arts and humanities community. Our activities will run until the end of May, and will comprise three phases:

Phase 1 - Discovery

We will focus on repository or archive deposit as a foundation for the discovery and preservation of diverse outputs, and also as a way to help capture the connections between those objects and the commentary, annotation, and other associated artefacts. 

A data service for the arts and humanities must be developed with researcher needs as a priority, so the project team will engage in a series of semi-structured interviews with a variety of stakeholders including researchers, librarians, curators, and information technologists. The interviews will explore the following ideas:

  • What do researchers need when engaging in discovery of both heritage materials and new outputs?
  • Are there specific needs that relate to different types of content or use-cases? For example, research involving multimedia or structured information processing at scale?
  • What can the current infrastructure support, and where are the gaps between what we have and what we need?
  • What are the feasible technical approaches to transform information discovery?

Phase 2 - Data service programme scoping and planning

The findings from phase 1 will be synthesised using a commercial product strategy approach known as a canvas analysis. Based on the initial impressions from the semi-structured interviews, it is likely that an agile, product, or value proposition canvas will be used to synthesise the findings and structure thinking so that a coherent and robust strategy can be developed. Outputs from the strategy canvas exercise will then be applied to a fully costed and scoped product roadmap and budget for a national data deposit service for the arts and humanities.

Phase 3 - Scoping a unified archiving solution

Building on the partnerships and conversations from the previous phases, the feasibility of a unified ‘deposit switchboard’ will be explored. The purpose of such a switchboard is to enable researchers, curators, and creators to easily deposit items in the most appropriate repository or archive in their field for the object type they are uploading. Using insights gained from the landscaping interviews in phase 1, the team will identify potential pathways to developing a routing service for channelling content to the most appropriate home.

We will conclude with a virtual community workshop to explore the challenges and desirability of the switchboard approach, with a special focus on the benefits this could bring to the uploader of new content and resources.

This is an ambitious project, through which we hope to deliver:

  • A fully costed and scoped technical and organisational roadmap to build the required components and framework for the National Collection
  • Improved usage of resources in the wider GLAM and institutional network, including of course the Archaeology Data Service, The British Library's Shared Research Repository, and the Oxford Text Archive
  • Steps towards a truly community-governed data infrastructure for the arts and humanities as part of the National Collection

As a result of this work, access to UK cultural heritage and outputs will be accelerated and simplified, the impact of the arts and humanities will be enhanced, and we will help the community to consolidate the UK's position as a global leader in digital humanities and infrastructure.

This post is from Rachael Kotarski (@RachPK), Principal Investigator for CONNECTED, and Josh Brown from MoreBrains.

08 March 2022

Black Theatre and the Archive: Making Women Visible, 1900-1950

On International Women’s Day 2022 we are pleased to announce our upcoming online Wikithon event, Black Theatre and the Archive: Making Women Visible, 1900-1950, which will take place on Monday 28th March, 10:00 – 13:30 BST. Working with one of the Library’s notable collections, the Lord Chamberlain’s Plays, we will be looking to increase the visibility and presence of Black women on Wikipedia, with a specific focus on twentieth century writers and performers of works in the collection, such as Una Marson and Pauline Henriques, alongside others who are as yet lesser-known than their male counterparts.

The Lord Chamberlain’s Plays are the largest single manuscript collection held by the Library. Between 1824 and 1968 all plays staged in the UK had to be submitted to the Lord Chamberlain’s Office for licensing, a requirement rooted in two important acts of Parliament related to theatre in the UK: the Stage Licensing Act of 1737 and the Theatres Act of 1843. You can watch Dr Alexander Lock, Curator of Modern Archives and Manuscripts at the British Library, discussing this collection with Giuliano Levato, who runs the People of Theatre vlog, in the video below.

The Lord Chamberlain’s Plays with British Library Curator Dr Alexander Lock on People of Theatre - The Vlog for Theatregoers

We are delighted to be collaborating with Professor Kate Dossett of the University of Leeds. Kate is currently working on ‘Black Cultural Archives & the Making of Black Histories: Archives of Surveillance and Black Transnational Theatre’, a project supported by an Independent Social Research Foundation Fellowship and a Fellowship from our very own Eccles Centre. Her work is crucial in shining light on the understudied area of Black theatre history in the first half of the twentieth century.

A woman and a man sit behind a desk with an old-fashioned microphone that says ‘BBC’. The woman is on the left, holding a script, looking at the microphone. The man is also holding a script and looking away.
Pauline Henriques and Sam Selvon in 1952. Image: BBC UK Government, Public domain, via Wikimedia Commons.

Our wikithon is open to everyone: you can register for free here. We will be blogging in the run up to the event with details on how to prepare. We are thankful to be supported by the British Newspaper Archive and FindMyPast, who will provide registered participants access to their online resources for the day of the event. You can also access 1 million free newspaper pages at any time, as detailed in this blog post.  

We hope to consider a variety of questions, such as what a timeline of Black British theatre history looks like, who gets to decide its parameters, and how we can make women more visible in these studies. We will think about the traditions shaping Black British theatre and the collections that help us understand this field of study, such as the Lord Chamberlain’s Plays. This kind of hands-on historical research helps us to better represent marginalised voices in the present day.

It will be the first of a series of three Wikithons exploring different elements of the Lord Chamberlain’s Plays. Throughout 2022 we will host another two Wikithons. Please follow this blog, our twitter @BL_DigiSchol and keep an eye on our Wiki Project Page for updates about these.

Art + Feminism Barnstar: a black and white image of a fist holding a paintbrush in front of a green star.
Art + Feminism Barnstar, by Ilotaha13, (CC BY-SA 4.0)

We are running this workshop as part of the Art + Feminism Wiki movement, with an aim to expand and amplify knowledge produced by and about Black women. As they state in their publicity materials:

Women make up only 19% of biographies on English Wikipedia, and women of colour even fewer. Wikipedia's gender trouble is well-documented: in a 2011 survey (the 2010 UNU-MERIT survey), the Wikimedia Foundation found that less than 10% of its contributors identify as female; more recent research, such as the 2013 Benjamin Mako Hill survey, points to 16% globally and 22% in the US. The data relative to trans and non-binary editors is basically non-existent. That's a big problem. While the reasons for the gender gap are up for debate, the practical effect of this disparity is not: gaps in participation create gaps in content.

We want to combat this imbalance directly. As a participant at this workshop, you will receive training on creating and editing Wikipedia articles to communicate the central role played by Black women in British theatre making between 1900 and 1950. You will also be invited to explore resources that can enable better citation justice for women of colour knowledge producers and greater awareness of archive collections documenting Black British histories. With expert support from Wikimedians and researchers alike, this is a great opportunity to change Wikipedia for the better.

This post is by Wikimedian in Residence Dr Lucy Hinnie (@BL_Wikimedian) and Digital Curator Stella Wisdom (@miss_wisdom).

14 February 2022

PhD Placement on Mapping Caribbean Diasporic Networks through Correspondence

Every year the British Library hosts a range of PhD placement scheme projects. If you are interested in applying for one of these, the 2022 opportunities are advertised here. There are currently 15 projects available across Library departments, all starting from June 2022 onwards and ending before March 2023. If you would like to work with born digital collections, you may want to read last week’s Digital Scholarship blog post about two projects on enhanced curation, hybrid archives and emerging formats. However, if you are interested in Caribbean diasporic networks and want to experiment with creating network analysis visualisations, then read on to find out more about the “Mapping Caribbean Diasporic Networks through correspondence (2022-ACQ-CDN)” project.

This is an exciting opportunity to be involved with the preliminary stages of a project to map the Caribbean Diasporic Network evident in the ‘Special Correspondence’ files of the Andrew Salkey Archive. This placement will be based in the Contemporary Literary and Creative Archives team at the British Library with support from Digital Scholarship colleagues. The successful candidate will be given access to a selection of correspondence files to create an item level dataset and explore the content of letters from the likes of Edward Kamau Brathwaite, C.L.R. James, and Samuel Selvon.

Photograph of Andrew Salkey
Photograph of Andrew Salkey, from the Andrew Salkey Archive, Deposit 10310. With kind permission of Jason Salkey.

The main outcome envisaged for this placement is the development of a dataset, using a sample of ten files, linking the data and mapping the correspondents’ names, the locations they were writing from, and the dates of the correspondence in a spreadsheet. The placement student will also learn how to use the Gephi Open Graph Visualisation Platform to create a visual representation of this network, associating individuals with each other and mapping their movement across the world between the 1950s and 1990s.

Gephi is open-source software for visualising and analysing networks. Its developers provide a step-by-step guide to getting started, with the first step being to upload a spreadsheet detailing your ‘nodes’ and ‘edges’. To show how Gephi can be used, we've included an example below, created by previous British Library research placement student Sarah FitzGerald from the University of Sussex using data from the Endangered Archives Programme (EAP): a Gephi visualisation of all EAP applications received between 2004 and 2017.

Gephi network visualisation diagram
Network visualisation of EAP Applications created by Sarah FitzGerald

In this visualisation the size of each country relates to the number of applications it features in, as country of archive, country of applicant, or both. The colours show related groups. Each line shows the direction and frequency of application: the line always travels in a clockwise direction from country of applicant to country of archive, and the thicker the line, the more applications. Where the country of applicant and country of archive are the same, the line becomes a loop. If you want to read more about the other visualisations that Sarah created during her project, please check out her two earlier blog posts.
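For anyone wanting to experiment before applying, Gephi's spreadsheet import expects exactly this kind of tabular data. The sketch below is purely illustrative - it is not part of the placement brief, and the identifiers, names and dates are invented - but it shows how the 'nodes' and 'edges' files mentioned above could be prepared with pandas:

import pandas as pd

# One row per person in the network
nodes = pd.DataFrame([
    {"Id": "salkey", "Label": "Andrew Salkey"},
    {"Id": "brathwaite", "Label": "Edward Kamau Brathwaite"},
    {"Id": "james", "Label": "C.L.R. James"},
])

# One row per letter: who wrote to whom, with the date kept as an extra attribute
edges = pd.DataFrame([
    {"Source": "brathwaite", "Target": "salkey", "Type": "Directed", "Date": "1966"},
    {"Source": "james", "Target": "salkey", "Type": "Directed", "Date": "1971"},
])

# Gephi can then read these two CSV files through its spreadsheet import
nodes.to_csv("nodes.csv", index=False)
edges.to_csv("edges.csv", index=False)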

We hope this new PhD placement will offer the successful candidate the opportunity to develop their specialist knowledge through access to the extensive correspondence series in the Andrew Salkey archive, and to undertake practical research in a curatorial context by improving the accessibility of linked metadata for this collection material. This project is a vital building block in improving the Library’s engagement with this material and exploring the ways it can be accessed by a wider audience.

If you want to apply, details are available on the British Library website at https://www.bl.uk/research-collaboration/doctoral-research/british-library-phd-placement-scheme. Applications for all 2022/23 PhD Placements close on Friday 25 February 2022, 5pm GMT. The application form and guidelines are available online here. Please address any queries to [email protected].

This post is by Digital Curator Stella Wisdom (@miss_wisdom) and Eleanor Casson (@EleCasson), Curator in Contemporary Archives and Manuscripts.