Digital scholarship blog

Enabling innovative research with British Library digital collections


26 July 2024

Charting the European D-SEA Conference at the Stabi

This blog post is by Dr Adi Keinan-Schoonbaert, Digital Curator for Asian and African Collections, British Library. She's on Mastodon as @[email protected]. 

 

Earlier this month, I had the pleasure of attending the “Charting the European D-SEA: Digital Scholarship in East Asian Studies” conference held at the Berlin State Library (Staatsbibliothek zu Berlin), also known as the Stabi. The conference, held on 11-12 July 2024, aimed to fill a gap in the European digital scholarship landscape by creating a research community and a space for knowledge exchange on digital scholarship issues across humanities disciplines concerned with East Asian regions and languages.

The event was a dynamic fusion of workshops, presentations and panel discussions. Over three days of workshops (8-10 July), participants were introduced to key digital methods, resources, and databases. These sessions aimed to impart practical knowledge in digital scholarship, focusing on East Asian collections and data. The subsequent two days were dedicated to the conference proper, featuring a broad range of presentations on various themes.

The reading room in the Berlin State Library, Haus Potsdamer Straße

 

DH and East Asian Studies in Europe and Beyond

Conference organisers Jing Hu and Brent Ho from the Stabi, and Shih-Pei Chen and Dagmar Schäfer from the Max Planck Institute for the History of Science (MPIWG), set the stage for an enriching exchange of ideas and knowledge. The diversity of topics covered was impressive: from established digital resources and research tools to AI applications in historical research, the sessions provided a comprehensive overview of the current state and future directions of the field.

There were so many excellent presentations – and I often wished I could clone myself to attend parallel sessions! As expected, there was much focus on working with AI – machine learning and generative AI – and their potential in historical and humanities research. AI technologies offer powerful tools for data analysis and pattern recognition, and can significantly enhance research capabilities.

Damian Mandzunowski (Heidelberg University) talked about using AI to extract and analyse information from Chinese Comics
 
Shaojian Li (Renmin University of China) looked into automating the classification of pattern images using deep learning

One notable session was "Reflections on Deep Learning & Generative AI," chaired by Brent Ho and discussed by Clemens Neudecker. The roundtable highlighted the evolving role of AI in humanities research. Calvin Yeh from MPIWG discussed AI's potential to augment, rather than just automate, research processes. He shared intriguing examples of using AI tools like ChatGPT to simulate group discussions and suggest research actions. Hongsu Wang from Harvard University presented on the use of Large Language Models and traditional Transformers in the China Biographical Database (CBDB) project, demonstrating the effectiveness of these models in data extraction and standardisation.

Calvin Yeh (MPIWG) discussed AI for “Augmentation, not only Automation” and experimented with ChatGPT discussing a research approach, designing a research process and simulating a group discussion
 
Hongsu Wang (Harvard University) talked about extracting and standardising data using LLMs and traditional Transformers in the CBDB project – here showcasing Jeffrey Tharsen’s research to create a network graph using a prompt in ChatGPT

 

Exploring the Stabi

Our group tour in the Stabi was a personal highlight for me. This historic library, part of the Prussian Cultural Heritage Foundation, is renowned for its extensive collections and commitment to making digitised materials publicly accessible. The library operates from two major public sites – Haus Unter Den Linden and Haus Potsdamer Straße. Tours of both locations were available, but I chose to explore the more recent building, designed by Hans Scharoun and located in the Kulturforum on Potsdamer Straße in West Berlin – the history and architecture of which are fascinating.

A group of the conference delegates enjoying the tour of SBB’s Haus Potsdamer Straße

I really enjoyed catching up with old colleagues and making new connections with fellow scholars passionate about East Asian digital humanities!

To conclude

The Charting the European D-SEA conference at the Stabi was an enriching experience, offering deep insights into the integration of digital methods in East Asian studies and allowing me to connect with a global community of scholars. The combination of traditional and more recent digital practices, coupled with the forward-looking discussions on AI and deep learning, made this conference a significant milestone in the field. I look forward to seeing how these conversations evolve and contribute to the broader landscape of digital humanities.

 

04 July 2024

DHNB 2024 - Digital Humanities in the Nordic and Baltic Countries Conference Report

This is a joint blog post by Helena Byrne, Curator of Web Archives, Harry Lloyd, Research Software Engineer, and Rossitza Atanassova, Digital Curator.

Conference banner showing Icelandic landscape with mountains
This year’s Digital Humanities in the Nordic and Baltic countries conference took place at the University of Iceland School of Education in Reykjavik. It was the eighth conference in a series established in 2016, but the first to be held in Iceland. The theme for the conference was “From Experimentation to Experience: Lessons Learned from the Intersections between Digital Humanities and Cultural Heritage”. There were pre-conference workshops from May 27-29, with the main conference starting on the afternoon of May 29 and finishing on May 31. In her excellent opening keynote Sally Chambers, Head of Research Infrastructure Services at the British Library, discussed the complex research and innovation data space for cultural heritage. Three British Library colleagues report highlights of their conference experience in this blog post.

Helena Byrne, Curator of Web Archives, Contemporary British & Irish Publications.

I presented in the Born Digital session held on May 28. There were four presentations in this session: three related to web archiving and one to Twitter (X) data. I co-presented ‘Understanding the Challenges for the Use of Web Archives in Academic Research’, which synthesised the findings of two research studies published through the WARCnet research network. There was lots of discussion after the presentation on how web archives could be used as a research data management tool to help manage online citations in academic publications.

Helena presenting to an audience during the conference session on born-digital archives
Helena presenting in the born-digital archives session

The conference programme was very strong and there were many takeaways that relate to my role. One strong theme was ‘collections as data’. At the UK Web Archive we have just started to publish some of our inactive curated collections as data, so these discussions were very useful. One highlight was the panel ‘Publication and reuse of digital collections: A GLAM Labs approach’. What stood out for me in this session was the checklist for publishing collections as data. It was very reassuring to see that we had pretty much everything covered for the release of the UK Web Archive datasets.

Rossitza and I were kindly offered a tour of the National and University Library of Iceland by Kristinn Sigurðsson, Head of Digital Projects and Development. We enjoyed meeting curatorial staff from the Special Collections who showed us some of the historical maps of Iceland that have been digitised. We also visited the digitisation studio to see how they process periodicals, and spoke to staff involved with web archiving. Thank you to Kristinn and his colleagues for this opportunity to learn about the library’s collections and digital services.

Rossitza and Helena standing by the moat outside the National Library of Iceland building
Rossitza and Helena outside the National and University Library of Iceland

 

Inscription in Icelandic reading National and University Library of Iceland outside the Library building
The National and University Library of Iceland

Harry Lloyd, Research Software Engineer, Digital Research.

DHNB2024 was a rich conference from my perspective as a research software engineer. Sally Chambers’ opening keynote on Wednesday afternoon demonstrated an extraordinary grasp of the landscape of digital cultural heritage across the EU. By this point there had already been a day and a half of workshops, including a session Rossitza and I presented on Catalogues as Data.

I spent the first half using a Jupyter notebook to explain how we extracted entries from an OCR’d version of the catalogue of the British Library’s collection of 15th century books. We used an explainable algorithm rather than a ‘black-box’ machine learning one, so we walked through the steps involved and discussed where it worked well and where it could be improved. You can follow along by clicking the ‘launch notebook’ button in the ReadMe here.
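
The gist of an explainable, rule-based approach like this can be sketched in a few lines: split the OCR’d text wherever a shelfmark-like heading starts a line. The pattern below is hypothetical and purely illustrative; the linked notebook documents the actual rules.

```python
import re

# Hypothetical shelfmark-style heading, e.g. "IB.39624" at the start of a line
ENTRY_START = re.compile(r"^I[AB]\.\d+", re.MULTILINE)

def split_entries(ocr_text: str) -> list[str]:
    """Split OCR'd catalogue text into entries: a new entry begins
    wherever a shelfmark-like heading starts a line."""
    starts = [m.start() for m in ENTRY_START.finditer(ocr_text)]
    bounds = zip(starts, starts[1:] + [len(ocr_text)])
    return [ocr_text[a:b].strip() for a, b in bounds]

sample = "IB.39624\nFirst entry...\nIA.12345\nSecond entry..."
print(split_entries(sample))  # two entries, split at each shelfmark
```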

Harry pointing to an image from the catalogue of printed books on a screen for the workshop audience
Harry explaining text recognition results during the workshop

Handing over to Rossitza in the second half to discuss her corpus linguistic analysis worked really well, giving attendees a feel for the complete workflow. This really showed in some great conversations we had with attendees over the following days about tricky problems like where to store the ‘true’ results of OCR.

Highlights from the rest of the conference included Clelia LaMonica’s work using a Latin large language model to analyse kinship in texts from Medieval Burgundy. Large language models trained on historic texts are important, as most are trained on modern material and struggle with historical language. Jørgen Burchardt presented some refreshingly quantitative work on bias across a digitised newspaper collection, very reminiscent of work by Kaspar Beelen. Overall it was a productive few days, and I very much enjoyed my time in Reykjavik.

Rossitza Atanassova, Digital Curator, Digital Research.

This was my second DHNB conference and I was looking forward to reconnecting with the community of researchers and cultural heritage practitioners, some of whom I had met at DHNB2019 in Copenhagen. Apart from the informal discussions with attendees, I contributed to DHNB2024 in two main ways.

As already mentioned, Harry and I delivered a pre-conference workshop showcasing some processes and methodology we use for working with printed catalogues as data. In the session we used the corpus tool AntConc to perform computational analysis of the descriptions for the British Library’s collection of books published in the 15th century. You can find out more about the project here and reuse the workshop materials published on Zenodo here.

I also joined the pre-conference meeting of the international GLAM Labs Community held at the National and University Library of Iceland. This was the first in-person meeting of the community in five years and was a productive session during which we brainstormed ‘100 ideas for the GLAM Labs Community’. Afterwards we had a sneak peek of the archive of the National Theatre of Iceland, which is being catalogued and digitised.

The main hall of the Library with a chessboard on a table with two chairs, a statue of a man, holding spectacles and a stained glass screen.
The main hall of the Library.

The DHNB community is so welcoming and supportive, and attracts many early career digital humanists. I was particularly interested to hear from doctoral students researching the use of AI with digitised archives, and using NLP methods with historical collections. One of the projects that stood out for me was Johannes Widegren’s PhD research into the ethical use of AI to enable access and discovery of Sami cultural heritage, and to develop library and archival practice. 

I was also interested in presentations that discussed workflows for creating Named Entity Recognition resources for historical archives, and I plan to try out the open-source Label Studio tool that I learned about. And of course, the poster session is always a highlight and I enjoyed finding out about a range of projects, including computational analysis of Scandinavian runic texts, digital reconstruction of Gothenburg’s 1923 Jubilee exhibition, and training large language models to track semantic variation in climate change vocabulary in Danish news articles.

A line up of people standing in front of a screen advertising the venue for DHNB25 in Estonia
The poster presentations session chaired by Olga Holownia

We are grateful to all DHNB24 organisers for the warm welcome and a great conference experience, with special thanks to the inspirational and indefatigable Olga Holownia.

26 June 2024

Join the British Library as a Digital Curator, OCR/HTR

This is an updated version of an earlier blog post by Dr Adi Keinan-Schoonbaert, Digital Curator for Asian and African Collections. She shares some background information on how a newly advertised Digital Curator for OCR/HTR post will help the Library streamline post-digitisation work to make its collections even more accessible to users. Our previous run of this recruitment was curtailed due to the cyber-attack on the Library - but we are now ready to restart the process!

 

We’ve been digitising our collections for about three decades, opening up access to incredibly diverse and rich collections, for our users to study and enjoy. However, it is important that we further support discovery and digital research by unlocking the huge potential in automatically transcribing our collections.

We’ve done some work over the years towards making our collection items available in machine-readable format, in order to enable full-text search and analysis. Optical Character Recognition (OCR) technology has been around for a while, and there are several large-scale projects that produced OCRed text alongside digitised images – such as the Microsoft Books project. Until recently, Western-language print collections have been the main focus, especially newspaper collections. A flagship collaboration with the Alan Turing Institute, the Living with Machines project, applied OCR technology to UK newspapers, designing and implementing new methods in data science and artificial intelligence, and analysing these materials at scale.

OCR of Bengali books using Transkribus, Two Centuries of Indian Print Project

Machine Learning technologies have been dealing increasingly well with both modern and historical collections, whether printed, typewritten or handwritten. Taking a broader perspective on Library collections, we have been exploring opportunities with non-Western collections too. Library staff have been engaging closely with the exploration of OCR and Handwritten Text Recognition (HTR) systems for English, Bangla, Arabic, Urdu and Chinese. Digital Curators Tom Derrick, Nora McGregor and Adi Keinan-Schoonbaert teamed up with PRImA Research Lab and the Alan Turing Institute to run four competitions in 2017-2019, inviting providers of text recognition methods to try them out on our historical material.

We have been working with Transkribus as well – for example, Alex Hailey, Curator for Modern Archives and Manuscripts, used the software to automatically transcribe 19th century botanical records from the India Office Records. A digital humanities work strand led by former colleague Tom Derrick saw the OCR of most of our digitised collection of Bengali printed texts, digitised as part of the Two Centuries of Indian Print project. More recently Transkribus has been used to extract text from catalogue cards in a project called Convert-a-Card, as well as from Incunabula print catalogues.

An example of a catalogue card in Transkribus, showing segmentation and transcription

We've also collaborated with Colin Brisson from the READ_Chinese project on Chinese HTR, working with eScriptorium to enhance binarisation, segmentation and transcription models using manuscripts that were digitised as part of the International Dunhuang Programme. You can read more about this work in this brilliant blog post by Peter Smith, who did a PhD placement with us last year.

The British Library is now looking for someone to join us to further improve the access and usability of our digital collections, by integrating a standardised OCR and HTR production process into our existing workflows, in line with industry best practice.

For more information and to apply please visit the ad for Digital Curator for OCR/HTR on the British Library recruitment site. Applications close on Sunday 21 July 2024. Please pay close attention to questions asked in the application process. Any questions? Drop us a line at [email protected].

Good luck!

07 May 2024

Recovered Pages: Computing for Cultural Heritage Student Projects

The British Library is continuing to recover from last year’s cyber-attack. While our teams work to restore our services safely and securely, one of our goals in the Digital Research Team is to get some of the information from our currently inaccessible web pages into an easily readable and shareable format. We’ll be sharing these pages via blog posts here, with information recovered from the Wayback Machine, a fantastic initiative of the Internet Archive.  

The next page in this series is all about the student projects that came out of our Computing for Cultural Heritage project with the National Archives and Birkbeck University. This student project page was captured by the Wayback Machine on 7 June 2023.  

 

Computing for Cultural Heritage Student Projects

computing for cultural heritage logo - an image of a laptop with bookshelves as the screen saver

This page provides abstracts for a selection of student projects undertaken as part of a one-year part-time Postgraduate Certificate (PGCert), Computing for Cultural Heritage, co-developed by the British Library, The National Archives and Birkbeck University and funded by the Institute of Coding as part of a £4.8 million University skills drive.

“I have gone from not being able to print 'hello' in Python to writing some relatively complex programs and having a much greater understanding of data science and how it is applicable to my work."

- Jessica Green  

Key points

  • The aim of the trial was to provide professionals working in the cultural heritage sector with an understanding of basic programming and computational analytic tools to support them in their daily work 
  • During the Autumn & Spring terms (October 2019-April 2020), 12 staff members from the British Library and 8 staff members from The National Archives completed two new trial modules at Birkbeck University: Demystifying computing for heritage professionals and Work-based Project 
  • Birkbeck University have now launched the Applied Data Science (Postgraduate Certificate) based on the outcomes of the trial

Student Projects

 

Transforming Physical Labels into Digital References 

Sotirios Alpanis, British Library
This project aims to use computing to convert data collected during the preparation of archive material for digitisation into a tool that can verify and validate image captures, and subsequently label them. This will take as its input physical information about each document being digitised, perform and facilitate a series of validations throughout image capture and quality assurance, and produce an XML file containing a map of physical labels to digital files. The project will take place within the British Library/Qatar Foundation Partnership (BL/QFP), which is digitising archive material for display on the QDL.qa.  

Enhancing national thesis metadata with persistent identifiers

Jenny Basford, British Library 
Working with data from ISNI (International Standard Name Identifier) Agency and EThOS (Electronic Theses Online Service), both based at the British Library, I intend to enhance the metadata of both databases by identifying doctoral supervisors in thesis metadata and matching these data with ISNI holdings. This work will also feed into the European-funded FREYA project, which is concerned with the use of a wide variety of persistent identifiers across the research landscape to improve openness in research culture and infrastructure through Linked Data applications.

A software tool to support the social media activities of the Unlocking Our Sound Heritage Project

Lucia Cavorsi, British Library
Video
I would like to design a software tool able to flag forthcoming anniversaries, by comparing all the dates present in SAMI (sound and moving image catalogue – Sound Archive) with the current date. The aim of this tool is to suggest potential content for the Sound Archive’s social media posts. Useful dates in SAMI which could be matched with the current date and provide material for tweets are birth and death dates of performers or authors, radio programme broadcast dates, and recording dates. I would like this tool to also match the subjects currently present in SAMI with the subjects featured in the list of anniversaries 2020 which the social media team uses, for example anniversaries like ‘International HIV day’ and ‘International day of Lesbian visibility’. A Windows pop-up message will be designed to deliver anniversary notifications on the day. If time permits, it would be convenient to also analyse what hashtags have been used over the last year by the people who are followed by or follow the Sound Archive Twitter account. By extracting a list of these hashtags, further (and more sound-related) anniversaries could be added to the list currently used by the UOSH’s social media team.

Computing Cholera: Topic modelling the catalogue entries of the General Board of Health

Christopher Day, The National Archives
BlogOther
The correspondence of the General Board of Health (1848–1871) documents the work of a body set up to deal with cholera epidemics in a period where some English homes were so filthy as to be described as “mere pigholes not fit for human beings”. Individual descriptions for each of these over 89,000 letters are available on Discovery, The National Archives (UK)’s catalogue. Now, some 170 years later, access to the letters themselves has been disrupted by another epidemic, COVID-19. This paper examines how data science can be used to repurpose archival catalogue descriptions, initially created to enhance the ‘human findability’ of records (and favoured by many UK archives due to high digitisation costs), for large-scale computational analysis. The records of the General Board will be used as a case study: their catalogue descriptions topic modelled using a latent Dirichlet allocation model, visualised, and analysed – giving an insight into how new sanitary regulations were negotiated with a divided public during an epidemic. The paper then explores the validity of using the descriptions of historical sources as a source in their own right; and asks how, during a time of restricted archival access, metadata can be used to continue research.
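
For readers new to the technique, here is a minimal sketch of topic modelling short catalogue descriptions with latent Dirichlet allocation in scikit-learn. The sample descriptions are invented and this is not the project's actual pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented catalogue descriptions standing in for the Discovery data
descriptions = [
    "Letter on cholera outbreak and sanitary conditions in Leeds",
    "Report on sewerage works and drainage of dwellings",
    "Correspondence regarding burial grounds during the epidemic",
]

# Build a document-term matrix, then fit an LDA model over it
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Print the top terms for each discovered topic
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```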

An Automated Text Extraction Tool for Use on Digitised Maps

Nicholas Dykes, British Library
Blog / Video
Researchers of history often have difficulty geo-locating historical place names in Africa. I would like to apply automated transcription techniques to a digitised archive of historical maps of Africa to create a resource that will allow users to search for text, and discover where, and on which maps that text can be found. This will enable identification and analysis both of historical place names and of other text, such as topographical descriptions. I propose to develop a software tool in Python that will send images stored locally to the Google Vision API, and retrieve and process a response for each image, consisting of a JSON file containing the text found, pixel coordinate bounding boxes for each instance of text, and a confidence score. The tool will also create a copy of each image with the text instances highlighted. I will experiment with the parameters of the API in order to achieve the most accurate results.  I will incorporate a routine that will store each related JSON file and highlighted image together in a separate folder for each map image, and create an Excel spreadsheet containing text results, confidence scores, links to relevant image folders, and hyperlinks to high-res images hosted on the BL website. The spreadsheet and subfolders will then be packaged together into a single downloadable resource.  The finished software tool will have the capability to create a similar resource of interlinked spreadsheet and subfolders from any batch of images.
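
A minimal sketch of the core of such a tool, assuming the google-cloud-vision client library and credentials are configured; the full project described above adds highlighted copies, spreadsheets and packaging:

```python
from google.cloud import vision  # assumes google-cloud-vision is installed

client = vision.ImageAnnotatorClient()  # reads credentials from the environment

def extract_text(image_path: str) -> list[dict]:
    """Send one local map image to the Vision API and return each text
    instance with its pixel-coordinate bounding box."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    results = []
    for ann in response.text_annotations[1:]:  # element 0 is the whole page
        box = [(v.x, v.y) for v in ann.bounding_poly.vertices]
        results.append({"text": ann.description, "box": box})
    return results
```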

Reconstituting a Deconstructed Dataset using Python and SQLite

Alex Green, The National Archives
Video
For this project I will rebuild a database and establish the referential integrity of the data from CSV files using Python and SQLite. To do this I will need to study the data, read the documentation, draw an entity relationship diagram and learn more about relational databases. I want to enable users to query the data as they would have been able to in the past. I will then make the code reusable so it can be used to rebuild other databases, testing it with a further two datasets in CSV form. As an additional challenge, I plan to rearrange the data to meet the principles of ‘tidy data’ to aid data analysis.
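
In outline, a rebuild like this might look as follows with Python's built-in csv and sqlite3 modules; the two-table schema is invented for illustration:

```python
import csv
import sqlite3

# Invented schema: "series" and "records", linked by an enforced foreign key
con = sqlite3.connect("rebuilt.db")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE series (id INTEGER PRIMARY KEY, title TEXT)")
con.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, "
    "series_id INTEGER REFERENCES series(id), description TEXT)"
)

def load(table: str, path: str, n_cols: int) -> None:
    """Bulk-insert a CSV file (skipping its header row) into a table."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))[1:]
    placeholders = ", ".join("?" * n_cols)
    con.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)

load("series", "series.csv", 2)
load("records", "records.csv", 3)  # fails if a series_id has no parent row
con.commit()
```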

PIMMS: Developing a Model Pre-Ingest Metadata Management System at the British Library

Jessica Green, British Library
GitHub / Video
I am proposing a solution to analysing and preparing for ingest a vast amount of ‘legacy’ BL digitised content into the future Digital Asset Management System (DAMPS). This involves building a prototype for a SQL database to aggregate metadata about digitised content and preparing for SIP creation. In addition, I will write basic queries to aid in our ongoing analysis about these TIFF files, including planning for storage, copyright, digital preservation and duplicate analysis. I will use Python to import sample metadata from BL sources like SharePoint, Excel and BL catalogues – currently used for analysis of ‘live’ and ‘legacy’ digitised BL collections. There is at least 1 PB of digitised content on the BL networks alone, as well as on external media such as hard-drives and CDs. We plan to only ingest one copy of each digitised TIFF file set and need to ensure that the metadata is accurate and up-to-date at the point of ingest. This database, the Pre-Ingest Metadata Management System (PIMMS), could serve as a central metadata repository for legacy digitised BL collections until then. I look forward to using Python and SQL, as well as drawing on the coding skills from others, to make these processes more efficient and effective going forward.

Exploring, cleaning and visualising catalogue metadata

Alex Hailey, British Library
Blog / Video
Working with catalogue metadata for the India Office Records (IOR) I will undertake three tasks: 1) converting c430,000 IOR/E index entries to descriptions within the relevant volume entries; 2) producing an SQL database for 46,500 IOR/P descriptions, allowing enhanced search when compared with the BL catalogue; and 3) creating Python scripts for searching, analysis and visualisation, to be demonstrated on dataset(s) and delivered through Jupyter Notebooks.

Automatic generation of unique reference numbers for structured archival data.

Graham Jevon, British Library
Blog / Video / GitHub
The British Library’s Endangered Archives Programme (EAP) funds the digital preservation of endangered archival material around the world. Third party researchers digitise material and send the content to the British Library. This is accompanied by an Excel spreadsheet containing metadata that describes the digitised content. EAP’s main task is to clean, validate, and enhance the metadata prior to ingesting it into the Library’s cataloguing system (IAMS). One of these tasks is the creation of unique catalogue reference numbers for each record (each row of data on the spreadsheet). This is a predominantly manual process that is potentially time consuming and subject to human inputting errors. This project seeks to solve this problem. The intention is to create a Windows executable program that will enable users to upload a csv file, enter a prefix, and then click generate. The instant result will be an export of a new csv file, which contains the data from the original csv file plus automatically generated catalogue reference numbers. These reference numbers are not random. They are structured in accordance with an ordered archival hierarchy. The program will include additional flexibility to account for several variables, including language encoding, computational efficiency, data validation, and wider re-use beyond EAP and the British Library.
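
The hierarchy-aware numbering at the heart of such a tool can be sketched briefly; the level names and prefix below are illustrative rather than the actual EAP schema:

```python
# Illustrative level names and prefix; real EAP spreadsheets use a
# richer archival hierarchy than this two-level example.
def generate_refs(rows: list[dict], prefix: str) -> list[dict]:
    """Assign ordered, hierarchy-aware reference numbers in place."""
    series_n = file_n = 0
    for row in rows:
        if row["level"] == "series":
            series_n += 1
            file_n = 0
            row["reference"] = f"{prefix}/{series_n}"
        elif row["level"] == "file":
            file_n += 1
            row["reference"] = f"{prefix}/{series_n}/{file_n}"
    return rows

rows = [{"level": "series"}, {"level": "file"}, {"level": "file"},
        {"level": "series"}, {"level": "file"}]
print([r["reference"] for r in generate_refs(rows, "EAP999")])
# ['EAP999/1', 'EAP999/1/1', 'EAP999/1/2', 'EAP999/2', 'EAP999/2/1']
```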

Automating Metadata Extraction in Born Digital Processing

Callum McKean, British Library
Video
To automate the metadata extraction section of the Library’s current work-flow for born-digital processing using Python, then interrogate and collate information in new ways using the SQLite module.

Analysis of peak customer interactions with Reference staff at the British Library: a software solution

Jaimee McRoberts, British Library
Video
The British Library, facing on-going budget constraints, has a need to efficiently deploy Reference Services staff during peak periods of demand. The service would benefit from analysis of existing statistical data recording the timestamp of each customer interaction at a Reference Desk. In order to do this, a software solution is required to extract, analyse, and output the necessary data. This project report demonstrates a solution utilising Python alongside the pandas library which has successfully achieved the required data analysis.
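
A condensed sketch of this kind of analysis with pandas, assuming a CSV with one row per interaction and a timestamp column; the actual report is more thorough:

```python
import pandas as pd

# Assumed input: one row per customer interaction, with a "timestamp" column
df = pd.read_csv("interactions.csv", parse_dates=["timestamp"])

# Count interactions in hourly bins, then average by hour of day
# to reveal peak periods across the whole dataset
hourly = df.set_index("timestamp").resample("h").size()
by_hour = hourly.groupby(hourly.index.hour).mean()
print(by_hour.sort_values(ascending=False).head())
```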

Enhancing the data in the Manorial Documents Register (MDR) and making it more accessible

Elisabeth Novitski, The National Archives
Video
To develop computer scripts that will take the data from the existing separate and inconsistently formatted files and merge them into a consistent and organised dataset. This data will be loaded into the Manorial Documents Register (MDR) and National Register of Archives (NRA) to provide the user with improved search ability and access to the manorial document information.

Automating data analysis for collection care research at The National Archives: spectral and textual data

Lucia Pereira Pardo, The National Archives
The day-to-day work of a conservation scientist working for the care of an archival collection involves acquiring experimental data from the varied range of materials present in the physical records (inks, pigments, dyes, binding media, paper, parchment, photographs, textiles, degradation and restoration products, among others). To this end, we use multiple and complementary analytical and testing techniques, such as X-ray fluorescence (XRF), Fourier Transform Infrared (FTIR) and Fibre Optic Reflectance spectroscopies (FORS), multispectral imaging (MSI), colour and gloss measurements, microfading (MFT) and other accelerated ageing tests.  The outcome of these analyses is a heterogeneous and often large dataset, which can be challenging and time-consuming to process and analyse. Therefore, the objective of this project is to automate these tasks when possible, or at least to apply computing techniques to optimise the time and efforts invested in routine operations, so that resources are freed for actual research and more specialised and creative tasks dealing with the interpretation of the results.

Improving efficiencies in content development through batch processing and the automation of workloads

Harriet Roden, British Library
Video
With the purpose to support and enrich the curriculum, the British Library’s Digital Learning team produces large-scale content packages for online learners through individual projects. Due to their reliance on other internal teams within the workflow for content delivery, a substantial amount of resource is spent on routine tasks to duplicate collection metadata across various databases. In order to reduce inefficiencies, increase productivity and improve reliability, my project aimed to alleviate pressures across the workflow through workload automation, through four separate phases.

The Botish Library: building a poetry printing machine with Python

Giulia Carla Rossi, British Library
Blog / Video
This project aims to build a poetry printing machine, as a creative output that unites traditional content, new media and Python. The poems will be sourced from the British Library Digitised Books dataset collection, available under Public Domain Mark; I will sort through the datasets and identify which titles can be categorised as poetry using Python. I will then create a new dataset comprising these poetry books and relative metadata, which will then be connected to the printer with a Python script. The poetry printing machine will print randomized poems from this new dataset, together with some metadata (e.g. poem title, book title, author and shelfmark ID) that will allow users to easily identify the book.

Automating data entry in the UOSH Tracking Database

Chris Weaver, British Library
The proposed software solution is the creation of a Python script (to feature as a module in a larger script) to extract data from a web-based tool (either via obtaining data in JSON format via the sites' API or accessing the database powering the site directly). The data obtained is then formatted and inserted into corresponding fields in a Microsoft SQL Server database.

Final Module

Following the completion of the trial, participants had the opportunity to complete their PGCert in Applied Data Science by attending the final module, Analytic Tools for Information Professionals, which was part of the official course launched last autumn. We followed up with some of the participants to hear more about their experience of the full course:

“The third and final module of the computing for cultural heritage course was not only fascinating and enjoyable, it was also really pertinent to my job and I was immediately able to put the skills I learned into practice.  

The majority of the third module focussed on machine learning. We studied a number of different methods and one of these proved invaluable to the Agents of Enslavement research project I am currently leading. This project included a crowdsourcing task which asked the public to draw rectangles around four different types of newspaper advertisement. The purpose of the task was to use the coordinates of these rectangles to crop the images and create a dataset of adverts that can then be analysed for research purposes. To help ensure that no adverts were missed and to account for individual errors, each image was classified by five different people.  

One of my biggest technical challenges was to find a way of aggregating the rectangles drawn by five different people on a single page in order to calculate the rectangles of best fit. If each person only drew one rectangle, it was relatively easy for me to aggregate the results using the coding skills I had developed in the first two modules. I could simply find the average (or mean) of the five different classification attempts. But what if people identified several adverts and therefore drew multiple rectangles on a single page? For example, what if person one drew a rectangle around only one advert in the top left corner of the page; people two and three drew two rectangles on the same page, one in the top left and one in the top right; and people four and five drew rectangles around four adverts on the same page (one in each corner). How would I be able to create a piece of code that knew how to aggregate the coordinates of all the rectangles drawn in the top left and to separately aggregate the coordinates of all the rectangles drawn in the bottom right, and so on?  

One solution to this problem was to use an unsupervised machine learning method to cluster the coordinates before running the aggregation method. Much to my amazement, this worked perfectly and enabled me to successfully process the total of 92,218 rectangles that were drawn and create an aggregated dataset of more than 25,000 unique newspaper adverts.” 

-Graham Jevon, EAP Cataloguer; BL Endangered Archives Programme 
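
For readers curious how clustering before aggregation might look in code, here is a minimal sketch using scikit-learn's DBSCAN on made-up rectangles; the project's actual code and parameters will differ:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Made-up rectangles (x, y, width, height) drawn by several volunteers
# around two adverts on one page
rects = np.array([
    [10, 12, 100, 50], [12, 10, 98, 52], [11, 11, 101, 49],
    [300, 400, 80, 60], [302, 398, 82, 59],
])

# Cluster rectangle centres, then average each cluster for a best fit
centres = rects[:, :2] + rects[:, 2:] / 2
labels = DBSCAN(eps=30, min_samples=2).fit_predict(centres)

for label in sorted(set(labels) - {-1}):  # -1 marks outliers
    best_fit = rects[labels == label].mean(axis=0)
    print(f"Advert {label}: {best_fit.round(1)}")
```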

“The final module of the course was in some ways the most challenging — requiring a lot of us to dust off the statistics and algebra parts of our brain. However, I think, it was also the most powerful; revealing how machine learning approaches can help us to uncover hidden knowledge and patterns in a huge variety of different areas.  

Completing the course during COVID meant that collection access was limited, so I ended up completing a case study examining how generic tropes have evolved in science fiction across time using a dataset extracted from GoodReads. This work proved to be exceptionally useful in helping me to think about how computers understand language differently; and how we can leverage their ability to make statistical inferences in order to support our own, qualitative analyses. 

In my own collection area, working with born digital archives in Contemporary Archives and Manuscripts, we treat draft material — of novels, poems or anything else — as very important to understanding the creative process. I am excited to apply some of these techniques — particularly Unsupervised Machine Learning — to examine the hidden relationships between draft material in some of our creative archives. 

The course has provided many, many avenues of potential enquiry like this and I’m excited to see the projects that its graduates undertake across the Library.” 

- Callum McKean, Lead Curator, Digital; Contemporary British Collection

“I really enjoyed the Analytics Tools for Data Science module. As a data science novice, I came to the course with limited theoretical knowledge of how data science tools could be applied to answer research questions. The choice of using real-life data to solve queries specific to professionals in the cultural heritage sector was really appreciated as it made everyday applications of the tools and code more tangible. I can see now how curators’ expertise and specialised knowledge could be combined with tools for data analysis to further understanding of and meaningful research in their own collection area."

- Giulia Carla Rossi, Curator, Digital Publications; Contemporary British Collection

Please note this page was originally published in Feb 2021 and some of the resources, job titles and locations may now be out of date.

15 March 2024

Call for proposals open for DigiCAM25: Born-Digital Collections, Archives and Memory conference

Digital research in the arts and humanities has traditionally tended to focus on digitised physical objects and archives. However, born-digital cultural materials that originate and circulate across a range of digital formats and platforms are rapidly expanding and increasing in complexity, which raises opportunities and issues for research and archiving communities. Collecting, preserving, accessing and sharing born-digital objects and data presents a range of technical, legal and ethical challenges that, if unaddressed, threaten the archival and research futures of these vital cultural materials and records of the 21st century. Moreover, the environments, contexts and formats through which born-digital records are mediated necessitate reconceptualising the materials and practices we associate with cultural heritage and memory. Research and practitioner communities working with born-digital materials are growing and their interests are varied, from digital cultures and intangible cultural heritage to web archives, electronic literature and social media.

To explore and discuss issues relating to born-digital cultural heritage, the Digital Humanities Research Hub at the School of Advanced Study, University of London, in collaboration with British Library curators, colleagues from Aarhus University and the Endangered Material Knowledge Programme at the British Museum, is currently inviting submissions for the inaugural Born-Digital Collections, Archives and Memory conference, which will be hosted at the University of London and online from 2-4 April 2025. The full call for proposals and submission portal is available at https://easychair.org/cfp/borndigital2025.

Text on image says Born-Digital Collections, Archives and Memory, 2 - 4 April 2025, School of Advanced Study, University of London

This international conference seeks to further an interdisciplinary and cross-sectoral discussion on how the born-digital transforms what and how we research in the humanities. We welcome contributions from researchers and practitioners involved in any way in accessing or developing born-digital collections and archives, and interested in exploring the novel and transformative effects of born-digital cultural heritage. Areas of particular (but not exclusive) interest include:

  1. A broad range of born-digital objects and formats:
    • Web-based and networked heritage, including but not limited to websites, emails, social media platforms/content and other forms of personal communication
    • Software-based heritage, such as video games, mobile applications, computer-based artworks and installations, including approaches to archiving, preserving and understanding their source code
    • Born-digital narrative and artistic forms, such as electronic literature and born-digital art collections
    • Emerging formats and multimodal born-digital cultural heritage
    • Community-led and personal born-digital archives
    • Physical, intangible and digitised cultural heritage that has been remediated in a transformative way in born-digital formats and platforms
  2. Theoretical, methodological and creative approaches to engaging with born-digital collections and archives:
    • Approaches to researching the born-digital mediation of cultural memory
    • Histories and historiographies of born-digital technologies
    • Creative research uses and creative technologist approaches to born-digital materials
    • Experimental research approaches to engaging with born-digital objects, data and collections
    • Methodological reflections on using digital, quantitative and/or qualitative methods with born-digital objects, data and collections
    • Novel approaches to conceptualising born-digital and/or hybrid cultural heritage and archives
  3. Critical approaches to born-digital archiving, curation and preservation:
    • Critical archival studies and librarianship approaches to born-digital collections
    • Preserving and understanding obsolete media formats, including but not limited to CD-ROMs, floppy disks and other forms of optical and magnetic media
    • Preservation challenges associated with the platformisation of digital cultural production
    • Semantic technology, ontologies, metadata standards, markup languages and born-digital curation
    • Ethical approaches to collecting and accessing ‘difficult’ born-digital heritage, such as traumatic or offensive online materials
    • Risks and opportunities of generative AI in the context of born-digital archiving
  4. Access, training and frameworks for born-digital archiving and collecting:
    • Institutional, national and transnational approaches to born-digital archiving and collecting
    • Legal, trustworthy, ethical and environmentally sustainable frameworks for born-digital archiving and collecting, including attention to cybersecurity and safety concerns
    • Access, skills and training for born-digital research and archives
    • Inequalities of access to born-digital collecting and archiving infrastructures, including linguistic, geographic, economic, legal, cultural, technological and institutional barriers

Options for Submissions

A number of different submission types are welcomed and there will be an option for some presentations to be delivered online.

  • Conference papers (150-300 words)
    • Presentations lasting 20 minutes. Papers will be grouped with others on similar subjects or themes to form a complete session. There will be time for questions at the end of each session.
  • Panel sessions (100 word summary plus 150-200 words per paper)
    • Proposals should consist of three or four 20-minute papers. There will be time for questions at the end of each session.
  • Roundtables (200-300 word summary and 75-100 word bio for each speaker)
    • Proposals should include between three and five speakers, inclusive of a moderator, and each session will be no more than 90 minutes.
  • Posters, demos & showcases (100-200 words)
    • These can be traditional printed posters, digital-only posters, digital tool showcases, or software demonstrations. Please indicate the form your presentation will take in your submission.
    • If you propose a technical demonstration of some kind, please include details of technical equipment to be used and the nature of assistance (if any) required. Organisers will be able to provide a limited number of external monitors for digital posters and demonstrations, but participants will be expected to provide any specialist equipment required for their demonstration. Where appropriate, posters and demos may be made available online for virtual attendees to access.
  • Lightning talks (100-200 words)
    • Talks will be no more than 5 minutes and can be used to jump-start a conversation, pitch a new project, find potential collaborations, or try out a new idea. Reports on completed projects would be more appropriately given as 20-minute papers.
  • Workshops (150-300 words)
    • Please include details about the format, length, proposed topic, and intended audience.

Proposals will be reviewed by members of the programme committee. The peer review process will be double-blind, so no names or affiliations should appear on the submissions. The one exception is proposals for roundtable sessions, which should include the names of proposed participants. All authors and reviewers are required to adhere to the conference Code of Conduct.

The submission deadline for proposals, originally 15 May 2024, has been extended to 7 June 2024, and notification of acceptance is now scheduled for early August 2024. Organisers plan to make a number of bursaries available to presenters to cover the cost of attendance and details about these will be shared when notifications are sent. 

Key Information:

  • Dates: 2 - 4 April 2025
  • Venue: University of London, London, UK & online
  • Call for papers deadline: 7 June 2024
  • Notification of acceptance: early August 2024
  • Submission link: https://easychair.org/cfp/borndigital2025

Further details can be found on the conference website and the call for proposals submission portal at https://easychair.org/cfp/borndigital2025. If you have any questions about the conference, please contact the organising committee at [email protected].

09 October 2023

Strike a Pose Steampunk style! For our Late event with Clockwork Watch on Friday 13th October

This Friday (13th October) the British Library invites you to join the world of Clockwork Watch by Yomi Ayeni: a participatory storytelling project set in a fantastical retro-futurist vision of Victorian England, with floating cities and sky pirates, and one of the showcased narratives in our Digital Storytelling exhibition.

Flyer with text saying Late at the Library, Digital Steampunk at the British Library, London. Friday 13 October, 19:30 – 22:30

We are delighted that Dark Box Images will be bringing their portable darkroom to the Late at the Library: Digital Steampunk event and taking portrait photographs. If this appeals to you, then please arrive early to have your picture taken. Photographer Gregg McNeill is an expert in the wet plate collodion process invented by Frederick Scott Archer in 1851. Gregg’s skill in using an authentic Victorian camera creates genuinely remarkable results that appear right in front of your eyes.

Black and white photograph of a woman wearing an elaborate outfit and a mask with her arms outstretched wide with fabric like wings
Wet plate collodion photograph of Jennifer Garside of Wyte Phantom corsetry, taken by Gregg McNeill of Dark Box Images

If you want to pose for the camera at our steampunk Late, or have a portrait drawn by artist Doctor Geof, please don’t be shy – this is an event where guests are encouraged to dress to impress! The aesthetic of steampunk fashion is inspired by Victoriana and 19th Century literature, including Jules Verne’s novels and the Sherlock Holmes stories by Sir Arthur Conan Doyle. Steampunk looks can include hats and goggles, tweed tailoring, waistcoats, corsets, fob watches and fans. Whatever your personal style, we encourage you to unleash your creativity when putting together an outfit for this event.

Furthermore, whether you are seeking a new look or some finishing touches, there will be an opportunity to browse a Night Market at this Late event, where you can purchase and admire a range of exquisite hand crafted items created by:

  • Jema Hewitt, a professional costumer and academic, will be bringing some of her unique, handmade jewellery and accessories to the Library Late event. She was one of the originators of the early artistic steampunk scene in the UK, subsequently exhibiting her costume work internationally, and having three how-to-make books published as her alter ego “Emilly Ladybird”. Jema currently specialises as a pattern cutter for film, theatre and TV, as well as lecturing and teaching workshops.
Photograph of jewellery, hats and clothing
Jewellery, hats and clothing created by Jema Hewitt/Emilly Ladybird
  • Doctor Geof, an artist, scientist, comics creator and maker of whimsical objects. His work is often satirical, usually with an historical twist, and features tea, goblins, krakens, steampunk, smut, nuns, bees, cats and more tea. Since 2004 you may have encountered him selling his comics, prints, cards, mugs, pins, and for some reason a lot of embroidered badges (including an Evil Librarian patch!) at various events. As one of the foremost Steampunk artists in the UK, Doctor Geof has worked with and exhibited at the Cutty Sark, Royal Museums Greenwich, and Discovery Museum Newcastle. He is a talented portrait artist, so please seek him out if you would like him to capture your likeness in ink and watercolour.
A round embroidered patch with a cartoon figure wearing goggles and carrying books. Text says "Evil Librarian"
Evil Librarian embroidered patch by Dr Geof

  • Jennifer Garside, a seamstress specialising in modern corsetry, which takes inspiration from historical styles. Her business, Wyte Phantom, opened in 2010, and she has made costumes for opera singers, performers and artists across the world.

  • Tracy Wells, a couture milliner based in the Lake District. She creates all kinds of hats and headpieces, often collaborating with other artists to explore new styles, concepts and genres.
Photograph of a woman wearing a steampunk hat with feathers
Millinery by Tracy Wells
  • Herr Döktor, a renowned inventor, gadgeteer, and contraptionist, who has been working in his Laboratory in the Surrey Hills for the last two decades, building a better future via the prism of history. He will be bringing a small selection of his inventions and scale models of his larger ideas. (His alter ego, Ian Crichton, is a professional model maker with thirty years’ experience as a toy prototype maker, museum and exhibition designer and, most recently, builder of props and models for the film industry; he also lives in the Surrey Hills.) 
Photograph of a man wearing a top hat and carrying a model submarine
Herr Döktor, inventor, gadgeteer, and contraptionist. Photograph by Adam Stait
  • Linette Withers established Anachronalia in 2012 to be a full-time bookbinder, producing historically-inspired books, miniature books, and quirky stationery. Her work has been shortlisted for display at the Bodleian Library at the University of Oxford as part of their ‘Redesigning the Medieval Book’ competition and exhibition in 2018 and one of her books is held in the permanent collection of The Lit & Phil in Newcastle after being part of an exhibition of bookbinding in 2021. She also teaches bookbinding in her studio in Leeds.

  • Heather Hayden of Diamante Queen Designs creates handmade vintage inspired, kitsch, macabre, noir accessories for everybody to wear and enjoy. Heather studied fashion and surface pattern design in the 80's near Leeds during the emergence of Gothic culture and has remained interested in the darker side of life ever since. She became fascinated with Steampunk after seeing Datamancer's Steampunk computer, loving the juxtaposition of new and old technology. This inspired her to make steampunk clothing and accessories using old and found items and upcycling as much as possible.
Photograph of a mannequin head wearing a headpiece with tassels, feathers, flowers and beads
Headpiece by Diamante Queen Designs
  • Matthew Chapman of Raphael's Workshop specialises in creating strange and sublime chainmail items, bringing ideas to life in metal that few would ever consider. From collars to corsets, serpents to squids, arms to armour and medals to masterpieces, you should visit his stall and see what creations spark the imagination.
Photograph of a table displaying a range of wearable items of chainmail jewellery and accessories
Chainmail jewellery and accessories created by Raphael's Workshop

We hope that this post has whetted your appetite for the delights available at the Late at the Library: Digital Steampunk event on Friday 13th October at the British Library. Tickets can be booked here.

21 September 2023

Convert-a-Card: Helping Cataloguers Derive Records with OCLC APIs and Python

This blog post is by Harry Lloyd, Research Software Engineer in the Digital Research team, British Library. You can sometimes find him at the Rose and Crown in Kentish Town.

Last week Dr Adi Keinan-Schoonbaert delved into the invaluable work that she and others have done on the Convert-a-Card project since 2015. In this post, I’m going to pick up where she left off, and describe how we’ve been automating parts of the workflow. When I joined the British Library in February, Victoria Morris and former colleague Giorgia Tolfo had prototyped programmatically extracting entities from transcribed catalogue cards and searching by title and author in the OCLC WorldCat database for any close matches. I have been building on this work, and addressing the last yellow rectangle below: “Curator disambiguation and resolution”. Namely how curators choose between OCLC results and develop a MARC record fit for ingest into British Library systems.

A flow chart of the Convert-a-card workflow. Digital catalogue cards to Transkribus to bespoke language model to OCR output (shelfmark, title, author, other text) to OCLC search and retrieval and shelfmark correction to spreadsheet with results to curator disambiguation and resolution to collection metadata ingest
The Convert-a-Card workflow at the start of 2023

 

Entity Extraction

We’re currently working with the digitised images from two drawers of cards, one Urdu and one Chinese. Adi and Giorgia used a layout model on Transkribus to successfully tag different entities on the Urdu cards. The transcribed XML output then had ‘title’, ‘shelfmark’ and ‘author’ tags for the relevant text, making them easy to extract.

On the left an image of an Urdu catalogue card, on the right XML describing the transcribed text, including a "title" tag for the title line
Card with layout model and resulting XML for an Urdu card, showing the `structure {type:title;}` parameter on line one
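Parsing those tags back out of the export is short work in Python. Below is a minimal sketch, assuming Transkribus's PAGE XML output, where structure tags live in each line's `custom` attribute as shown in the image above; the function name and the returned dictionary layout are my own illustration rather than the project's actual code.

```python
# A sketch of reading tagged entities from a Transkribus PAGE XML export.
# The namespace and "custom" attribute format follow the PAGE schema that
# Transkribus exports; function and variable names here are illustrative.
import re
import xml.etree.ElementTree as ET

PAGE_NS = {"p": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

def tagged_entities(xml_path):
    """Collect the text of every line tagged with a structure type."""
    tree = ET.parse(xml_path)
    entities = {}
    for line in tree.iterfind(".//p:TextLine", PAGE_NS):
        # Structure tags appear in the custom attribute, e.g.
        # custom="readingOrder {index:0;} structure {type:title;}"
        match = re.search(r"structure \{type:(\w+);\}", line.get("custom", ""))
        if match:
            text = line.findtext("p:TextEquiv/p:Unicode", "", PAGE_NS)
            entities[match.group(1)] = text.strip()
    return entities

# e.g. tagged_entities("card_0001.xml")
# -> {"shelfmark": "...", "title": "...", "author": "..."}
```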

The same method didn’t work for the Chinese cards, possibly because they are less consistently structured. There is, however, consistency in the vertical order of entities on the card: shelfmark comes above title, which comes above author. This meant I could reuse some code we developed for Rossitza Atanassova’s Incunabula project, which reliably retrieved title and author (and occasionally an ISBN); a simplified sketch follows below.

Two Chinese cards side-by-side, with different layouts.
Chinese cards. Although the layouts are variable, shelfmark is reliably the first line, with title and author following.
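Since the vertical order is dependable, the extraction logic itself can be very simple: map lines to fields by position. A minimal sketch, assuming each card's transcription arrives as a plain list of lines (the function name and the example card are illustrative, not the project's code):

```python
# Order-based entity extraction for cards with a reliable vertical
# layout: shelfmark first, then title, then author.
def extract_by_position(lines):
    """Assign transcribed lines to entities by vertical position."""
    fields = ["shelfmark", "title", "author"]
    cleaned = [line.strip() for line in lines if line.strip()]
    return dict(zip(fields, cleaned))

# Illustrative example (made-up shelfmark):
print(extract_by_position(["15298.c.41", "紅樓夢", "曹雪芹"]))
# {'shelfmark': '15298.c.41', 'title': '紅樓夢', 'author': '曹雪芹'}
```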

 

Querying OCLC WorldCat

With the title and author for each card, we were set up to query WorldCat, but how to do this when there are over two thousand cards in these two drawers alone? Victoria and Giorgia made impressive progress combining Python wrappers for the Z39.50 protocol (PyZ3950) and MARC format (Pymarc). With their prototype, a lot of googling of ASN.1, BER and Z39.50, and a couple of quiet weeks drifting through the web of references between the two packages, I built something that could turn a table of titles and authors for the Chinese cards into a list of MARC records. I had also brushed up on enough UTF-8 to work out why none of the Chinese characters were encoded correctly, and fix it.

For all that I enjoyed trawling through it, Z39.50 is, in the words of a 1999 tutorial, “rather hard to penetrate” and nearly 35 years old. PyZ3950, the Python wrapper, hasn’t been maintained for two years, and making any changes to the code is a painstaking process. While Z39.50 remains widely used for transferring information between libraries, that doesn’t mean there aren’t better ways of doing things, and in the name of modernity OCLC offer a suite of APIs for their services. Crucially, there are endpoints on their Metadata API that allow search and retrieval of records in MARCXML format. As the British Library maintains a cataloguing subscription to OCLC, we have access to the APIs, so all that’s needed is a call to the OCLC OAuth server, a search on the Metadata API using title and author, then retrieval of the MARCXML for any results. This is very straightforward in Python: with the Requests package and about ten lines of code we can have our MARCXML matches.
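To give a flavour of those ten-or-so lines, here is a hedged sketch of the two calls using Requests. The endpoint URLs, scope name, and query syntax below are assumptions for illustration rather than the project's actual code; OCLC's Metadata API documentation has the definitive details.

```python
# A sketch of the two-step flow: fetch a bearer token from the OCLC
# OAuth server, then search the Metadata API for MARCXML records.
# URLs, scope and query syntax are assumed for illustration.
import requests

TOKEN_URL = "https://oauth.oclc.org/token"                    # assumed
SEARCH_URL = "https://metadata.api.oclc.org/worldcat/search"  # assumed

def get_token(client_id, client_secret):
    """Request a bearer token via the client-credentials grant."""
    resp = requests.post(
        TOKEN_URL,
        auth=(client_id, client_secret),
        data={"grant_type": "client_credentials",
              "scope": "WorldCatMetadataAPI"},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def search_marcxml(token, title, author):
    """Search WorldCat by title and author, asking for MARCXML back."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'ti:"{title}" AND au:"{author}"'},
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/marcxml+xml"},
    )
    resp.raise_for_status()
    return resp.text
```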

Selecting Matches

At all stages of the project we’ve needed someone to select the best match for a card from WorldCat search results. This responsibility currently lies with curators and cataloguers from the relevant collection area. With that audience in mind, I needed a way to present MARC data from WorldCat so curators could compare the MARC fields for different matches. The solution needed to let a cataloguer choose a card, show the card and a table with the MARC fields for each WorldCat result, and ideally provide filters so curators could use domain knowledge to filter out bad results. I put out a call on the cross-government data science network, and a colleague in the 10DS data science team suggested Streamlit.

Streamlit is a Python package that allows fast development of web apps without needing to be a web app developer (which is handy, as I’m not one). Adding Streamlit commands to the script that processes WorldCat MARC records into a dataframe quickly turned it into a functioning web app. The app reads in a dataframe of the cards in one drawer and their potential WorldCat matches, and presents it as a table of cards to choose from. You then see the image of the card you’re working on and a MARC field table for the relevant WorldCat matches. This side-by-side view makes it easy to scan across a particular MARC field and exclude matches that have, for example, the wrong physical dimensions. There’s a filter for cataloguing language, sort options for things like the number of subject access fields and the total number of fields, and the ability to remove bad matches from view. Once the cataloguer has chosen a match they can save it to the original dataframe, or note that there was no good match, or only a partial one.

Screenshot from the Streamlit web app, with an image of a Chinese catalogue card above a table containing MARC data for different WorldCat matches relating to the card.
Screenshot from the Streamlit Convert-a-Card web app, showing the card and the MARC table curators use to choose between matches. As the cataloguers are familiar with MARC, providing the raw fields is the easiest way to choose between matches.
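To show how little scaffolding this takes, here is a simplified sketch of an app along these lines. The dataframe columns and the pickle file name are hypothetical stand-ins for the project's actual data structures.

```python
# A simplified sketch of the match-selection app, assuming a pickled
# dataframe with one row per card. All column and file names here are
# hypothetical stand-ins, not the project's actual code.
import pandas as pd
import streamlit as st

cards = pd.read_pickle("chinese_drawer_matches.pkl")  # hypothetical file

# Let the cataloguer pick a card from the drawer.
shelfmark = st.selectbox("Choose a card", cards["shelfmark"])
row = cards.set_index("shelfmark").loc[shelfmark]

# Show the card image above a MARC field table for its WorldCat matches.
st.image(row["card_image_path"])
st.dataframe(row["marc_table"])  # one column per match, one row per field

# Record the cataloguer's decision back to the dataframe.
choice = st.radio("Best match", ["No good match", *row["match_ids"]])
if st.button("Save decision"):
    cards.loc[cards["shelfmark"] == shelfmark, "chosen_match"] = choice
    cards.to_pickle("chinese_drawer_matches.pkl")
    st.success(f"Saved decision for {shelfmark}")
```

Running `streamlit run app.py` serves something like this as a local web app, with no HTML, CSS, or JavaScript written at all.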

After some very positive initial feedback, we sat down with the Chinese curators and had them test the app out. That led to a fun, interactive, user-experience-focussed feedback session, and a whole host of GitHub issues on the repository for bugs and design suggestions. Behind-the-scenes discussions on where to host the app and data are ongoing and not straightforward, but this has been a remarkably easy product to prototype, and I’m optimistic it will provide a lightweight, gentle-learning-curve complement to full deriving software like Aleph (the Library’s main cataloguing system).

Next Steps

The project currently uses a range of technologies: Transkribus, the OCLC APIs, and Streamlit, and tying these together has in itself been a success. Going forward, we can look forward to extracting non-English text from the cards, and to the richer list of entities this would make available. Working with the OCLC APIs has been a learning curve, and they’re not working perfectly yet, but they represent a relatively accessible option compared to Z39.50. My hope for the Streamlit app is that it will be a useful tool beyond this project, wherever someone wants to use WorldCat to help derive records from minimal information. We still have challenges to overcome in design, data storage, and hosting, but these discussions should have their own benefits in making future development easier. The goal for the automation part of the project is a smooth flow of data from Transkribus, through OCLC, and on to the curators; while it’s not perfect, we’re definitely getting there.

14 July 2023

Share Family: British National Bibliography (Beta) service is live

Contents

Introduction

Share Family and National Bibliographies

       What is a National bibliography?

       BNB in the Share Family

Benefits

Future developments

Beta service

Further information

 

Introduction

The British National Bibliography (BNB), first published in January 1950, is a weekly listing of new books and journals published or distributed in the United Kingdom and the Republic of Ireland.  Over the last seventy-three years, the BNB has adapted to changing customer needs by embracing new technologies, from cards in the 1950s to mark-up languages for data exchange in the 1970s and CD-ROM in the 1980s. The BNB now provides online access to details of over 5 million publications and forthcoming titles, ranging in scope from computer science to history, from novels to textbooks.

 

Two examples of bibliographies including information like title, author, place of publication, year, description, prices etc.
1. Examples of British National Bibliography records, April 19th 2023. Please click the image to see it in full size & detail.

In 2011, the Library launched the Linked Open Data BNB.  At that time, linked data was an emerging technology using Web protocols to link data sets, as envisaged in Sir Tim Berners-Lee’s concept of a Semantic Web[1].  Our initial foray into linked data was successful from a technical perspective. We were able to convert BNB data held in Machine Readable Cataloging (MARC) format into linked data structures and make it available in a variety of schemas under an open licence.  Nevertheless, we lacked the capacity to re-model our data in order to realise the potential of linked data.  As the technology matured, we began to look around for partners with whom we could collaborate to take BNB forward.

As described in my September 2020 blogpost, British Library Joins Share-VDE Linked Data Community, the British Library joined the Share Community (now the Share Family) to develop our linked data service. The Share Linked Data Environment is “a global family built on collaboration that brings libraries, archives and museums together with a common goal and joins their knowledge in an ever-widening network of inter-connected bibliographic data.” (Share Family, 2022).

 

Share Family and National Bibliographies

“The Share Family is a suite of innovative tools and services, developed and driven by libraries, for libraries, in an international collaborative, consortial effort. Share-VDE enables the discovery of knowledge to increase user engagement with library and cultural heritage collections.”[2]

Screenshot: Share family components showing layers like Advanced API, Advanced Entity Model, Authority Service, Deliverables etc.
2. Share family components[3]. Please click the image to see it in full size & detail.

The Share Family has supported us through the transition from our traditional MARC data to linked open data. We provided a full copy of the British National Bibliography to the Share team for identification and clustering of entities, e.g. works, publications, persons. Working with colleagues from other institutions on Share-VDE working groups, we contribute to the development of the underlying data structures and the presentation of data. This collaborative approach has enabled delivery of the British National Bibliography as the first institutional tenant of the Share Family National Bibliographies Portal.

What is a National bibliography?

“National bibliographies are a permanent record of the cultural and intellectual output of a nation or country, which is witnessed by its publishing output. They gather the bibliographic information of current publications to preserve and provide ongoing access to this record.”

IFLA Bibliography Section

The IFLA (International Federation of Library Associations and Institutions) Register of national bibliographies contains 52 entries, ranging from Andorra to Vietnam.  National bibliographies vary in scope, but each provides insights into the intellectual and cultural history of society, literature and publishing.  The Share Family National Bibliographies Portal offers the potential for clustering and searching multiple national bibliographies on a single platform.

BNB in the Share Family

Screenshot of the BNB home screen stating 'Search for people, original works and publications'
3. Screenshot BNB home screen. Please click the image to see it in full size & detail.

The British Library is proud that the British National Bibliography is the first tenant selected for the Share Family National Bibliographies Portal.

BNB is now available to explore in Beta: https://bl.natbib-lod.org. You can search for publications, original works and people, as illustrated by these examples:

You can use the national bibliography to search for a specific publication, such as a large print edition of the novel Small island by Andrea Levy.

Screenshot: Bibliographic description of large print edition of Small Island by Andrea Levy.
4. Screenshot: Bibliographic description of large print edition of Small Island by Andrea Levy. Please click the image to see it in full size & detail.

 

You can also find original works inspired by earlier works:

Screenshot: Results set for publication of the work, Small island by Helen Edmundson
5. Screenshot: Results set for publication of the work, Small island by Helen Edmundson. Please click the image to see it in full size & detail.

 

Alternatively, you can search for works by a specific author… 

Screenshot showing original works by Douglas Adams
6. Screenshot: Original works by Douglas Adams. Please click the image to see it in full size & detail.

 

…or about a specific person

Screenshot showing original works about Douglas Adams
7. Screenshot: Original works about Douglas Adams. Please click the image to see it in full size & detail.

 

…or by organization

Screenshot showing results set for BBC
8. Screenshot: Results set for BBC. Please click the image to see it in full size & detail.

 

Benefits

What benefit do we expect to gain from this collaboration?

  • We benefit from the practical experience our collaborators have gained through other linked data initiatives
  • We gain access to a state-of-the-art, extensible infrastructure designed for library data
  • We gain a new channel for dissemination of the BNB, in aggregation with other national bibliographies

We are also able to re-tool our metadata for the 21st century:

  • Our data will be remodelled and clustered making it more compatible with current data models, including the IFLA Library Reference Model, RDA: Resource Description and Access, and Bibframe
  • Our data will be enriched with URIs that will make it more effective in linked data environments
  • The entity-centred view of the British National Bibliography offers new perspectives for researchers

 

Future developments

Conversion of the BNB and publication in the National Bibliographies Portal is only the beginning. 

  • The BNB data from the Cluster Knowledge base will also be published in the triple store
  • Original records will be available to the British Library as Bibframe 2.0, for dissemination or reuse as linked data
  • Users will be provided with access to the data via data dumps and a SPARQL endpoint (see the sketch after this list)
  • Our MARC records will be enriched with original Share URIs and URIs from external sources
  • Other national bibliographies will join BNB in the national bibliographies portal
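As an illustration of what endpoint access might eventually look like for a researcher, here is a speculative sketch using the SPARQLWrapper Python package. The endpoint URL and the property queried are placeholders, since these service details are yet to be announced.

```python
# A speculative sketch of querying a future BNB SPARQL endpoint.
# The endpoint URL and the property used are placeholders, not a
# published service: the real details are yet to be announced.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://bnb.example.org/sparql")  # placeholder URL
sparql.setQuery("""
    SELECT ?work ?title WHERE {
      ?work <http://purl.org/dc/terms/title> ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Print each work URI alongside its title.
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["work"]["value"], binding["title"]["value"])
```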

The British National Bibliography represents only a fraction of the Library’s data.   You can explore the British Library’s collection through our catalogue, which we plan to contribute to Share-VDE in future.

 

Beta service

The British National Bibliography in the Share Family is being made available in Beta. The service is still being tested. The interface and the functionality are subject to change and may not work for everyone.  You can tell us what you think about the service or report problems by contacting [email protected].

 

Further information

British National Bibliography https://bnb.bl.uk  

Share VDE http://www.share-family.org/

Share Family wiki https://wiki.share-vde.org/wiki/Main_Page

Share VDE Virtual Discovery Environment in linked open data https://svde.org/

National Bibliographies in Linked Open Data https://natbib-lod.org

British National Bibliography Linked Open Data Portal https://bl.natbib-lod.org

 

Footnotes

[1] Berners-Lee, Tim; Hendler, James; Lassila, Ora (May 2001). "The Semantic Web". Scientific American, 284(5): 34-43.

[2] Share-VDE: supporting the creation, management and discovery of linked open data for libraries: executive summary. Share-VDE Executive Committee. December 7th, 2022. Share-VDE Website (viewed 19th June 2023)

[3] Share Family – Linked data ecosystem. How does it work?  http://www.share-family.org/  (viewed on 23rd June 2023)
