Digital scholarship blog

Enabling innovative research with British Library digital collections


12 April 2022

Making British Library collections (even) more accessible

Daniel van Strien, Digital Curator, Living with Machines, writes:

The British Library’s digital scholarship department has made many digitised materials available to researchers. These include a collection of books digitised in partnership with Microsoft and processed using Optical Character Recognition (OCR) software to make the text machine-readable, as well as a collection of books digitised in partnership with Google.

Since being digitised, this collection has been used for many different projects, including recent work to augment the dataset with genre metadata and a project using machine learning to tag images extracted from the books. The books have also served as training data for a historic language model.

This blog post will focus on two challenges of working with this dataset (size and documentation) and discuss how we’ve experimented with one potential approach to addressing them.

One of the challenges of working with this collection is its size: the OCR output is over 20GB. This poses challenges for researchers and other interested users wanting to work with the collection. Projects like Living with Machines are one avenue through which the British Library seeks to develop new methods for working at scale. For an individual researcher, one of the possible barriers to working with a collection like this is the computational resources required to process it.

Recently we have been experimenting with a Python library, datasets, to see if this can help make this collection easier to work with. The datasets library is part of the Hugging Face ecosystem. If you have been following developments in machine learning, you have probably heard of Hugging Face already. If not, Hugging Face is a delightfully named company focusing on developing open-source tools aimed at democratising machine learning. 

The datasets library is a tool that aims to make it easier for researchers to share and efficiently process large datasets for machine learning. Whilst this was the library’s original focus, it may also help make datasets held by the British Library more accessible for other use cases.

Some features of the datasets library:

  • Tools for efficiently processing large datasets 
  • Support for easily sharing datasets via a ‘dataset hub’ 
  • Support for documenting datasets hosted on the hub (more on this later). 

As a result of these and other features, we have recently worked on adding the British Library books dataset to the Hugging Face hub. Making the dataset available via the datasets library has made it more accessible in a few different ways.

Firstly, it is now possible to download the dataset in two lines of Python code: 

Image of a line of code: "from datasets import load_dataset ds = load_dataset('blbooks', '1700_1799')"

We can also use the Hugging Face library to process large datasets. For example, suppose we only want to include data with a high OCR confidence score (this partially helps filter out text with many OCR errors):

Image of a line of code: "ds.filter(lambda example: example['mean_wc_ocr'] > 0.9)"

One of the particularly nice features here is that the library uses memory mapping to store the dataset under the hood. This means that you can process data that is larger than the RAM you have available on your machine. This can make the process of working with large datasets more accessible. We could also use this as a first step in processing data before getting back to more familiar tools like pandas. 

Image of a line of code: "dogs_data = ds['train'].filter(lamda example: "dog" in example['text'].lower()) df = dogs_data_to_pandas()

In a follow-on blog post, we’ll dig into the technical details of datasets. Whilst making the technical processing of datasets more accessible is one part of the puzzle, there are also non-technical challenges to making a dataset more usable.

 

Documenting datasets 

One of the challenges of sharing large datasets is documenting the data effectively. Traditionally, libraries have mainly focused on describing material at the ‘item level’, i.e. documenting one item at a time. However, there is a difference between documenting one book and 100,000 books. There are no easy answers to this, but one possible avenue libraries could explore is the use of datasheets. Timnit Gebru et al. proposed the idea in ‘Datasheets for Datasets’. A datasheet aims to provide a structured format for describing a dataset, covering questions like how and why it was constructed, what the data consists of, and how it could potentially be used. Crucially, datasheets also encourage a discussion of the biases and limitations of a dataset. Whilst you can identify some of these limitations by working with the data, there is also a crucial amount of information known by curators of the data that might not be obvious to end-users. Datasheets offer one possible way for libraries to begin communicating this information more systematically.

The dataset hub adopts the practice of writing datasheets and encourages users of the hub to write one for their dataset. For the British Library books, we have attempted to write one of these datasheets. Whilst it is certainly not perfect, it hopefully begins to outline some of the challenges of this dataset and gives end-users a better sense of how they should approach it.

18 March 2022

Looking back at LibCrowds: surveying our participants

'In the Spotlight' is a crowdsourcing project from the British Library that aims to make digitised historical playbills more discoverable, while also encouraging people to closely engage with this otherwise less accessible collection. Digital Curator Dr Mia Ridge writes...

If you follow our @LibCrowds account on Twitter, you might have noticed that we've been working on refreshed versions of our In the Spotlight tasks on Zooniverse. That's part of a small project to enable the use of IIIF manifests on Zooniverse - in everyday language, it means that many, many more digitised items can form the basis of crowdsourcing tasks in the Zooniverse Project Builder, and In the Spotlight is the first project to use this new feature. Along with colleagues in Printed Heritage and BL Labs, I've been looking at our original Pybossa-based LibCrowds site to plan a 'graceful ending' for the first phase of the project on LibCrowds.com.

As part of our work documenting and archiving the original LibCrowds site, I'm delighted to share summary results from a 2018 survey of In the Spotlight participants, now published on the British Library's Research Repository: https://doi.org/10.23636/w4ee-yc34. Our thanks go to Susan Knight, Customer Insight Coordinator, for her help with the survey.

The survey was designed to help us understand who In the Spotlight participants were, and to help us prioritise work on the project. The 22-question survey was based on earlier surveys run by the Galaxy Zoo and Art UK Tagger projects, to allow comparison with other crowdsourcing projects, and to contribute to our understanding of crowdsourcing in cultural heritage more broadly. It was open to anyone who had contributed to the British Library's In the Spotlight project for historical playbills. The survey was distributed to LibCrowds newsletter subscribers, on the LibCrowds community forum and on social media.

Some headline findings from our survey include:

  • The typical respondent was a woman with a Masters degree, in full-time employment, in London or the Southeast of the UK, who contributes in a break between other tasks or 'whenever they have spare time'.
  • 76% of respondents were motivated by contributing to historical or performance research

Responses to the question 'What was it about this project which caused you to spend more time than intended on it?':

  • Easy to do
  • It's so entertaining
  • Every time an entry is completed you are presented with another item which is interesting and illuminating, which provides a continuous temptation regarding what you might discover next
  • simplicity
  • A bit of competitiveness about the top ten contributors but also about contributing something useful
  • I just got carried away with the fun
  • It's so easy to complete
  • Easy to want to do just a few more
  • Addiction
  • Felt I could get through more tasks
  • Just getting engrossed
  • It can be a bit addictive!
  • It's so easy to do that it's very easy to get carried away.
  • interested in the [material]

The summary report contains more rich detail, so go check it out!

 

Detail of the front page of libcrowds.com; Crowdsourcing projects from the British Library. 2,969 Volunteers. 265,648 Contributions. 175 Projects

10 March 2022

Scoping the connections between trusted arts and humanities data repositories

CONNECTED: Connecting trusted Arts and Humanities data repositories is a newly funded activity, supported by AHRC. It is led by the British Library, with the Archaeology Data Service and the Oxford Text Archive as co-investigators, and is supported by consultants from MoreBrains Cooperative. The CONNECTED team believes that improving discovery and curation of heritage and emergent content types in the arts and humanities will increase the impact of cultural resources, and enhance equity. Great work is already being done on discovery services for the sector, so we decided to look upstream, and focus on facilitating repository and archive deposit.

The UK boasts a dynamic institutional repository environment in the HE sector, as well as a range of subject- and field-specific repositories. With a distributed repository landscape now firmly established, challenges and inefficiencies remain that reduce its impact. These include issues around discovery and access, but also questions around interoperability, the relationship between specialised and general infrastructures, and potential duplication of effort from an author/depositor perspective. Greater coherence and interoperability will effectively unite different trusted repository services to form a resilient distributed data service, which can grow over time as new individual services are required and developed. Alongside the other projects funded as part of ‘Scoping future data services for the arts and humanities’, CONNECTED will help to deliver this unified network.

As practice in the creative arts becomes more digital and the digital humanities continue to thrive, the diversity of ways in which this research is expressed continues to grow. Researchers are increasingly able to combine artefacts, documents, and materials in new and innovative ways; practice-based research in the arts is creating a diverse range of (often complex) outputs, creating new curation and discovery needs; and heritage collections often contain artefacts with large amounts of annotation and commentary amassed over years or centuries, across multiple formats, and with rich contextual information. This expansion is already exposing the limitations of our current information systems, with the potential for vital context and provenance to become invisible. Without additional, careful future-proofing, the risks of information loss and limits on access will only grow. Metadata creation, deposit, preservation, and discovery strategies should therefore be tailored to meet the very different needs of the arts and humanities.

A number of initiatives are aimed at improving interoperability between metadata sources in ways that are more oriented towards the needs of the arts and humanities. Drawing these together with the insights to be gained from the abilities (and limitations) of bibliographic and data-centric metadata and discovery systems will help to generate robust services in the complex, evolving landscape of arts and humanities research and creation.

The CONNECTED project will assemble experts, practitioners, and researchers to map current gaps in the content curation and discovery ecosystem and weave together the strengths and potentials of a range of platforms, standards, and technologies in the service of the arts and humanities community. Our activities will run until the end of May, and will comprise three phases:

Phase 1 - Discovery

We will focus on repository or archive deposit as a foundation for the discovery and preservation of diverse outputs, and also as a way to help capture the connections between those objects and the commentary, annotation, and other associated artefacts. 

A data service for the arts and humanities must be developed with researcher needs as a priority, so the project team will engage in a series of semi-structured interviews with a variety of stakeholders including researchers, librarians, curators, and information technologists. The interviews will explore the following ideas:

  • What do researchers need when engaging in discovery of both heritage materials and new outputs?
  • Are there specific needs that relate to different types of content or use-cases? For example, research involving multimedia or structured information processing at scale?
  • What can the current infrastructure support, and where are the gaps between what we have and what we need?
  • What are the feasible technical approaches to transform information discovery?

Phase 2 - Data service programme scoping and planning

The findings from phase 1 will be synthesised using a commercial product strategy approach known as a canvas analysis. Based on the initial impressions from the semi-structured interviews, it is likely that an agile, product, or value proposition canvas will be used to synthesise the findings and structure thinking so that a coherent and robust strategy can be developed. Outputs from the strategy canvas exercise will then be applied to a fully costed and scoped product roadmap and budget for a national data deposit service for the arts and humanities.

Phase 3 - Scoping a unified archiving solution

Building on the partnerships and conversations from the previous phases, the feasibility of a unified ‘deposit switchboard’ will be explored. The purpose of such a switchboard is to enable researchers, curators, and creators to easily deposit items in the most appropriate repository or archive in their field for the object type they are uploading. Using insights gained from the landscaping interviews in phase 1, the team will identify potential pathways to developing a routing service for channelling content to the most appropriate home.

We will conclude with a virtual community workshop to explore the challenges and desirability of the switchboard approach, with a special focus on the benefits this could bring to the uploader of new content and resources.

This is an ambitious project, through which we hope to deliver:

  • A fully costed and scoped technical and organisational roadmap to build the required components and framework for the National Collection
  • Improved usage of resources in the wider GLAM and institutional network, including of course the Archaeology Data Service, The British Library's Shared Research Repository, and the Oxford Text Archive
  • Steps towards a truly community-governed data infrastructure for the arts and humanities as part of the National Collection

As a result of this work, access to UK cultural heritage and outputs will be accelerated and simplified, the impact of the arts and humanities will be enhanced, and we will help the community to consolidate the UK's position as a global leader in digital humanities and infrastructure.

This post is from Rachael Kotarski (@RachPK), Principal Investigator for CONNECTED, and Josh Brown from MoreBrains.

14 February 2022

PhD Placement on Mapping Caribbean Diasporic Networks through Correspondence

Every year the British Library hosts a range of PhD placement scheme projects. If you are interested in applying for one of these, the 2022 opportunities are advertised here. There are currently 15 projects available across Library departments, all starting from June 2022 onwards and ending before March 2023. If you would like to work with born digital collections, you may want to read last week’s Digital Scholarship blog post about two projects on enhanced curation, hybrid archives and emerging formats. However, if you are interested in Caribbean diasporic networks and want to experiment with creating network analysis visualisations, then read on to find out more about the “Mapping Caribbean Diasporic Networks through correspondence (2022-ACQ-CDN)” project.

This is an exciting opportunity to be involved with the preliminary stages of a project to map the Caribbean Diasporic Network evident in the ‘Special Correspondence’ files of the Andrew Salkey Archive. This placement will be based in the Contemporary Literary and Creative Archives team at the British Library with support from Digital Scholarship colleagues. The successful candidate will be given access to a selection of correspondence files to create an item level dataset and explore the content of letters from the likes of Edward Kamau Brathwaite, C.L.R. James, and Samuel Selvon.

Photograph of Andrew Salkey
Photograph of Andrew Salkey, from the Andrew Salkey Archive, Deposit 10310. With kind permission of Jason Salkey.

The main outcome envisaged for this placement is to develop a dataset, using a sample of ten files, linking the data and mapping the correspondents’ names, the locations they were writing from, and the dates of the correspondence in a spreadsheet. The placement student will also learn how to use the Gephi Open Graph Visualisation Platform to create a visual representation of this network, associating individuals with each other and mapping their movement across the world between the 1950s and 1990s.

Gephi is open-source software for visualising and analysing networks. Its developers provide a step-by-step guide to getting started, with the first step being to upload a spreadsheet detailing your ‘nodes’ and ‘edges’. To show how Gephi can be used, we've included an example below, created by previous British Library research placement student Sarah FitzGerald from the University of Sussex, who used data from the Endangered Archives Programme (EAP) to create a Gephi visualisation of all EAP applications received between 2004 and 2017.
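
Before turning to that example, here is a rough, invented sketch of what an edge spreadsheet for the Salkey correspondence network might look like. 'Source', 'Target' and 'Weight' are the column names Gephi's CSV importer recognises; the rows below are purely illustrative, not real counts from the archive:

    Source,Target,Weight
    Edward Kamau Brathwaite,Andrew Salkey,15
    C.L.R. James,Andrew Salkey,12
    Samuel Selvon,Andrew Salkey,8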

Gephi network visualisation diagram
Network visualisation of EAP Applications created by Sarah FitzGerald

In this visualisation, the size of each country relates to the number of applications it features in, as country of archive, country of applicant, or both. The colours show related groups. Each line shows the direction and frequency of applications: a line always travels in a clockwise direction from country of applicant to country of archive, and the thicker the line, the more applications. Where the country of applicant and country of archive are the same, the line becomes a loop. If you want to read more about the other visualisations that Sarah created during her project, please check out these two blog posts:

We hope this new PhD placement will offer the successful candidate the opportunity to develop their specialist knowledge through access to the extensive correspondence series in the Andrew Salkey archive, and to undertake practical research in a curatorial context by improving the accessibility of linked metadata for this collection material. This project is a vital building block in improving the Library’s engagement with this material and exploring the ways it can be accessed by a wider audience.

If you want to apply, details are available on the British Library website at https://www.bl.uk/research-collaboration/doctoral-research/british-library-phd-placement-scheme. Applications for all 2022/23 PhD Placements close on Friday 25 February 2022, 5pm GMT. The application form and guidelines are available online here. Please address any queries to [email protected]

This post is by Digital Curator Stella Wisdom (@miss_wisdom) and Eleanor Casson (@EleCasson), Curator in Contemporary Archives and Manuscripts.

26 January 2022

Which Came First: The Author or the Text? Wikidata and the New Media Writing Prize

Congratulations to the 2021 New Media Writing Prize (NMWP) winners, who were announced at a Bournemouth University online event recently: Joannes Truyens and collaborators (Main Prize), Melody MOU (Student Award) and Daria Donina (FIPP Journalism Award 2021). The main prize winner ‘Neurocracy’ is an experimental dystopian narrative that takes place over 10 episodes, through Omnipedia, an imagined future version of Wikipedia in 2049. So this seemed like a very apt jumping-off point for today’s blog post, which discusses a recent project where we added NMWP data to Wikidata.

Omnipedia, an imagined futuristic version of Wikipedia from Neurocracy by Joannes Truyens

Note: If you wish to read ‘Neurocracy’ and are prompted for a username and password, use NewMediaWritingPrize1 password N3wMediaWritingPrize!. You can learn more about the work in this article and listen to an interview with the author in this podcast episode.

Working With Wikidata

Dr Martin Poulter describes learning how to work with Wikidata as being like learning a language. When I first heard this description, I didn’t understand: how could something so reliant on raw data be anything like the intricacies of language learning?

It turns out, Martin was completely correct.

Imagine a stack of data as slips of paper. Each slip has an individual piece of data on it: an author’s name, a publication date, a format, a title. How do you start to string this data together so that it makes sense?

One of the beautiful things about Wikidata is that it is both machine and human readable. In order for it to work this way, and for us to upload it effectively, thinking about the relationships between these slips of paper is essential.

In 2021, I had an opportunity to see what Martin was talking about when he spoke about language, as I was asked to work with a set of data about NMWP shortlisted and winning works, which the British Library has collected in the UK Web Archive. You can read more about this special collection here and here.

Image of blank post-it notes and a hand with a marker pen preparing to write on one.

About the New Media Writing Prize

The New Media Writing Prize was founded in 2010 to showcase exciting and inventive stories and poetry that integrate a variety of digital formats, platforms, and media. One of the driving forces in setting up and establishing the prize was Chris Meade, director of if:book uk, a ‘think and do tank’ for exploring digital and collaborative possibilities for writers and readers. He was the lead sponsor of the if:book UK New Media Writing Prize and the Dot Award, which he created in honour of his mother, Dorothy, and he had chaired every NMWP awards evening since 2010. Very sadly, Chris passed away on 13th January 2022, and the recent 2021 awards event was dedicated to him and his family.

Recognising the significance of the NMWP, in recent years the British Library created the New Media Writing Prize Special Collection as part of its emerging formats work. With 11 years of metadata about a born digital collection, this was an ideal dataset for me to work with in order to establish a methodology for Wikidata uploads at the Library.

Last year I was fortunate to collaborate with Tegan Pyke, a PhD placement student in the Contemporary British Publications Collections team, supervised by Giulia Carla Rossi, Curator for Digital Publications. Tegan's project examined the digital preservation challenges of complex digital objects, developing and testing a quality assurance process for examining works in the NMWP collection. If you want to read more about this project, a report is available here. For the Wikidata work, Tegan and Giulia provided two spreadsheets of data (or slips of paper!), and my aim was to upload linked data that covered the authors, their works, and the award itself - who had been shortlisted, who had won, and when.

Simple, right?

Getting Started

I thought so - until I began to structure my uploads. There were some key questions that needed to be answered about how these relationships would be built, and I needed to start somewhere. Should I upload the authors or the texts first? Should I go through the prize year by year, or be led by other information? And what about texts with multiple authors?

Suddenly it all felt a bit more intimidating!

I was fortunate to attend some Wikidata training run by Wikimedia UK late last year. Martin was our trainer, and one piece of advice he gave us was indispensable: if you’re not sure where to start, literally write it out with pencil and paper. What is the relationship you’re trying to show, in its simplest form? This is where language framing comes in especially useful: thinking about the basic sentence structures I’d learned in high school German became vital.

Image shows four simple sentences: 'Christine Wilks won NMWP in 2010. Christine Wilks wrote Underbelly. Underbelly won NMWP in 2010. NMWP was won by Christine Wilks in 2010.' Christine Wilks is circled in green, NMWP in purple, and Underbelly in yellow. QIDs are listed: Q108810306 (green), Q108459688 (purple), Q109237591 (yellow). Properties are listed: P166 (blue), P800 (turquoise), P585 (orange).
Image by the author, notes own.

The Numbers Bit

You can see from this image how the framework develops: specific items, like nouns, are given identification numbers when they become a Wikidata item. This is their QID. The relationships between QIDs, sort of like the adjectives and verbs, are defined as properties and have P numbers. So Christine Wilks is now Q108810306, and her relationship to her work, Underbelly, or Q109237591, is defined with P800, which means ‘notable work’.

Q108810306 - P800 - Q109237591

You can upload this relationship using the visual editor on Wikidata, by clicking fields and entering data. If you have a large amount of information (remember those slips of paper!), tools like QuickStatements become very useful. Dominic Kane blogged about his experience of this system during his British Library student placement project in 2021.
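
As an illustration, the single relationship above could be expressed as one line of QuickStatements input, which takes commands as item, property and value separated by tabs (or pipes in the tool's URL format). This is a sketch of the tool's basic command format, not the exact batch used for the NMWP uploads:

    Q108810306|P800|Q109237591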

The intricacies of language are also very important on Wikidata. The nuance and inference we can draw from specific terms matter. The concept of ‘winning’ an award became a subject of semantic debate: the taxonomy of Wikidata advises that we use ‘award received’ in the case of a literary prize, as it is less of an active competition than something like a marathon or other athletic event.

Asking Questions of the Data

Ultimately we upload information to Wikidata so that it can be queried. Querying uses SPARQL, a language which allows users to draw information and patterns from vast swathes of data. Querying can be complex: to go back to the language analogy, you have to phrase the query in precisely the right way to get the information you want.

One of the lessons I learned during the NMWP uploads was the importance of a unifying property. Users will likely query this data with a view to surveying results and finding patterns. Each author and work, therefore, needed to be linked to the prize and the collection itself (pictured above). By adding the collection's QID to the property P6379 (‘has works in the collection’), we create a web of data that links every shortlisted author over the 11-year period.
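
For readers who would like to run a query programmatically rather than through the web interface, here is a minimal Python sketch against the public Wikidata Query Service endpoint. The SPARQL itself is an assumption built from the identifiers pictured above: it asks for items whose ‘award received’ (P166) value is the New Media Writing Prize (Q108459688):

    import requests

    # Public Wikidata Query Service endpoint
    ENDPOINT = 'https://query.wikidata.org/sparql'

    # Items that have received the NMWP (Q108459688), with English labels
    QUERY = '''
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P166 wd:Q108459688 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    '''

    response = requests.get(ENDPOINT, params={'query': QUERY, 'format': 'json'})
    response.raise_for_status()

    for result in response.json()['results']['bindings']:
        print(result['itemLabel']['value'])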

Example Queries

To have a look at some of the NMWP data, here are some queries I prepared earlier. Please note that data from the 2021 competition has not yet been uploaded!

Authors who won NMWP

Works that won NMWP

Authors nominated for NMWP

Works nominated for NMWP

If you fancy trying some queries but don’t know where to start, I recommend these tutorials:

Tutorials

Resources About SPARQL

This post is by Wikimedian in Residence Dr Lucy Hinnie (@BL_Wikimedian).

30 November 2021

BL Labs Online Symposium 2021, Special Climate Change Edition: Speakers Announced!

BL Labs 9th Symposium – Special Climate Change Edition is taking place on Tuesday 7 December 2021. This special event is devoted to looking at computational research and climate change.

A polar bear jumping off an iceberg with the rear of a ship showing. Image captioned: 'A Bear Plunging Into The Sea'
British Library digitised image from page 303 of "A Voyage of Discovery, made under the orders of the Admiralty, in his Majesty's ships Isabella and Alexander for the purpose of exploring Baffin's Bay, and enquiring into the possibility of a North-West Passage".

To help us explore a range of complex issues at the intersection of computational research and climate change we are delighted to announce our expert panel:

  • Schuyler Esprit – Founding Director of Create Caribbean Research Institute & Research Officer at the School of Graduate Studies and Research at the University of the West Indies
  • Helen Hardy – Science Digital Programme Manager at the Natural History Museum, London, responsible for mass digitisation of the Museum’s collections of 80 million items
  • Joycelyn Longdon – Founder of ClimateInColour, a platform at the intersection of climate science and social justice, and PhD Student on the Artificial Intelligence for Environmental Risk programme at University of Cambridge
  • Gavin Shaddick – Chair of Data Science and Statistics, University of Exeter, Director of the UKRI funded Centre for Doctoral Training in Environmental Intelligence: Data Science and AI for Sustainable Futures, co-Director of the University of Exeter-Met Office Joint Centre for Excellence in Environmental Intelligence and an Alan Turing Fellow
  • Richard Sandford – Professor of Heritage Evidence, Foresight and Policy at the Institute of Sustainable Heritage at University College London
  • Joseph Walton – Research Fellow in Digital Humanities and Critical and Cultural Theory at the University of Sussex

Join us for this exciting discussion addressing issues such as how digitisation can improve research efficiency, the pros and cons of AI and machine learning in relation to climate change, and the links between new technologies, climate and social justice.

You can see more details about our panel and book your place here.

11 November 2021

The British Library Adopts a New Persistent Identifier Policy

Since 29 September, the Library has had in place a new persistent identifier policy to support and guide the management of its collection. A persistent identifier, or PID, is a long-lasting digital reference to an entity, whether physical or digital. PIDs are a core component in providing reliable, long-term access to collections and in improving their discoverability. They also make it easier to track when and how collections are used. The Library has been using PIDs in various forms for almost a decade, but following the creation of a case study as part of the AHRC’s Towards a National Collection funded project, PIDs as IRO Infrastructure, the Library recognised the need to document its rationale and approach to PIDs and to lay down principles and requirements for their use.

An image of the world at night from space, showing the bright lights of cities and towns
Photo by NASA on Unsplash

The Library encourages the use of PIDs across its collections and collection metadata. It recognises the role PIDs have as a component in sustainable, open infrastructure and in enabling interoperability and the use of Library resources. PIDs also support the Library’s content strategy and its goal of connecting rather than collecting, as they enable long-term and reliable access to resources.

Many different types of PIDs are used across the Library, some of which it creates for itself, e.g. ARKs, and others which it harvests from elsewhere, e.g. DOIs that are used to identify journal articles. While not all existing Library services may meet the requirements described in this policy, it provides a benchmark against which they can be measured and aspire to develop.
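
As a concrete illustration of what ‘resolving’ a PID means in practice (a sketch, not part of the policy itself): a DOI can be followed to a human-readable landing page, or, where the registration agency supports content negotiation as DataCite does, asked for machine-readable metadata instead. The example below uses the DOI of the survey report mentioned earlier in this post:

    import requests

    doi_url = 'https://doi.org/10.23636/w4ee-yc34'

    # With no special headers, the DOI resolves to a human-readable landing page
    landing = requests.get(doi_url)
    print(landing.url)

    # Asking for citation metadata returns machine-readable JSON instead
    metadata = requests.get(
        doi_url,
        headers={'Accept': 'application/vnd.citationstyles.csl+json'},
    )
    print(metadata.json()['title'])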

To make sure staff at the Library are supported in implementing the policy, a working group has been convened to run until the end of December 2022. This group will raise awareness of the policy and ensure that guidance is made available to any project or service under review, so that the use of PIDs can be considered.

A public version of the policy is available on this page and an extract with the key points is provided below. The group would like to acknowledge the Bibliothèque nationale de France’s policy, which was influential in the creation of this policy.

Principles

In its use of identifiers, the British Library adheres to the following principles, which describe the qualities PIDs created, contributed or consumed by the Library must have.  

  • A PID must never be deleted but may be marked as deprecated if required
  • A PID must be usable in perpetuity to identify its associated entity
  • A PID must only describe one entity and must never be reused for different entities 
  • A PID must have established versioning processes and procedures in place; these may be defined locally by the Library as a creator or by the PID provider  
  • A PID must have established governance mechanisms, such as contracts, in place to ensure the standards of use of the PID are met and continue to be met  
  • A PID must resolve to metadata about the entity available in both a human and machine readable format 
  • A publicly accessible PID must be resolvable via a global resolver
  • A PID must have an operating model that is sustainable for long-term persistent use 

Established user community 

  • A PID must have an established user community, which has adopted it as a standard, either through an organisation such as the International Organization for Standardization (ISO) or as a de facto standard through widespread adoption; the Library will support and develop the use of new types of PIDs where there is a defined and recognised use case which they would address

Interoperable 

  • A PID must be able to link with the other identifiers in use at the Library through open metadata standards and the capability to cross-reference resources 

New PID types or new use 

  • New types of PIDs should only be considered for use in the Library where there is a defined need which cannot reasonably be met by a combination of PIDs already in use 
  • Any new PID type used by the Library should meet the requirements described in this policy 
  • Where a PID type is emerging and does not have an established community, the Library can seek to influence its development in line with principles for open and sustainable infrastructures 

Requirements

These requirements outline the Library’s responsibilities in using PID services and creating PIDs. While the Library currently uses some identifiers which do not meet all of these requirements, the requirements are included to guide future work and developments.

  • The Library aspires to assign PIDs to all resources within its collections, both physical and digital, and associated entities, in alignment with the guiding principles of the Library’s content strategy 2020-2023
  • The Library has varying levels of involvement in different PID schemes, but all PIDs created by the Library must meet the requirements described in this section and the Library prefers the use of PIDs which meet the principles
  • Identifiers created by the Library must have an opaque format, i.e. not contain any semantic information within them, to ensure their longevity 
  • A PID must resolve to information about the entity to which it refers 
  • The Library must have a process to specify the granularity at which PIDs are assigned and how relationships between PIDs for component and overarching entities are managed 
  • The Library must have a process to manage versioning including changes, merges and retirement of entities 
  • Standard descriptive information about an entity, e.g. creator, should have a PID 
  • All metadata associated with a PID should comply with Collection Metadata Licensing Guidelines 
  • Where a PID referring to a citable resource resolves to a webpage, that webpage should display a suggested citation including the hyperlink to the PID to encourage ongoing use of the PID outside the Library

If you would like to hear more about this policy and the Library’s approach to persistent identifiers, feel free to contact the Heritage PIDs project on Twitter or email [email protected].

This post is by Frances Madden (@maddenfc, orcid.org/0000-0002-5432-6116), Research Associate (PIDs as IRO Infrastructure) in the Research Infrastructure Services team.

10 November 2021

BL Labs Online Symposium 2021, Special Climate Change Edition: Book your place for webinar on Tuesday 7 December 2021

In response to the Climate Emergency and issues raised by COP26, the 9th British Library Labs Symposium is devoted to looking at computational research and climate change. Registration Now Open.

Futuristic, hologram looking version of the globe overlaid with images like wind turbines, water drops, trees and graphs.

British Library Labs is the British Library programme dedicated to enabling people to experiment with our digital collections, including deploying computational research methods and using our collections as data. This inevitably means that we, and the communities we work with, are increasingly applying computational tools and methods that have an environmental impact on our planet.

As our millions of pages of digitised content become an exciting new research frontier, and as we increasingly use machine learning methods and tools on large-scale projects such as Living with Machines, it is inevitable that this exciting new work comes with increased use of computational resources and energy. In view of the climate emergency, we hope to ensure that climate and sustainability considerations inform everything we do, meaning we need a much better understanding of digital environmental impacts and how these should inform our practice in all things related to computational research.

We know that this is not a simple issue - digitisation and digital preservation are often a lifeline for cultural heritage in communities where museums, libraries and archives are already endangered due to climate change - for example, the British Library’s Endangered Archives Programme is dedicated to digitising and saving archives in danger of destruction, including due to climate change. New digital resources, such as the UK Web Archive’s collections, the Climate Change collection in particular, as well as the International Internet Preservation Consortium’s Climate Change collection, are essential resources for climate researchers, especially as we are increasingly working with researchers who wish to text and data mine our collections for insights that can broaden our understanding of the changing climate and biodiversity, and the impact of these changes on different communities.

Equally, as in all other areas related to the impacts of climate change, we are aware that in relation to digital research there is also a strong interdependency with issues of equality and social justice. Digital advancements are enablers of new research, helping us to better understand different communities and to broaden access and opportunities, but we also need to consider how the complexities of computational research and access, as well as the expensive set-up and energy requirements of state-of-the-art infrastructures, might disadvantage researchers and communities that do not have access to relevant technologies, or to the prohibitively expensive and energy-demanding resources required to run them.

For this year’s BL Labs Symposium, we are bringing together a group of speakers who will consider these issues from different angles - from large-scale digitisation to digital humanities, climate and biodiversity research, and the impact of AI. We will look into how our digital strategies and projects can help us fight climate change and be more inclusive, but also how we can improve our sustainability and reduce our impact on the planet.

As well as the views from our panel, there will be an opportunity for extended audience input, helping us to bring forward views from the broader Labs community and learn together how our practice can be improved.

The 9th BL Labs Symposium takes place on Zoom on Tuesday 7th December from 16.30 until 18.00. Book your place now.
