Digital scholarship blog

Enabling innovative research with British Library digital collections

Introduction

Tracking exciting developments at the intersection of libraries, scholarship and technology. Read more

27 September 2023

Late at the Library: Digital Steampunk

Summer may be over, but there is much to look forward to this autumn, including our Late at the Library: Digital Steampunk event on Friday 13th October 2023, where we invite you to immerse yourself in the Clockwork Watch story world, party with chap hop maestro Professor Elemental, and explore 19th-century London in Minecraft. If this kind of shenanigans sounds right up your street, book tickets here and join us!

Clockwork Watch by Yomi Ayeni is currently showcased in the British Library’s Digital Storytelling exhibition, which is open until 15 October 2023. Set in a retro-futurist steampunk Victorian England, Clockwork Watch is a participatory story that weaves together multiple voices and perspectives on empire, colonialism, exploitation and resistance, told across a range of formats: a series of graphic novels (there is an overview of these titles here), immersive theatre, role play, and an online newspaper, the London Gazette.

Drawing of a range of people in steampunk clothing, in front of a London skyline
Steampunk Illustration by Brett Walsh

For the evening of Friday 13th October, the British Library will transform into the story world of the next part of the Clockwork Watch narrative, featuring an auction of the last few remaining properties on Peak B and the opening of bids for Peak C, new housing developments situated on floating islands hovering over the British Channel. Leggett and Scarper, the estate agents managing these properties, will also invite inventors, or anyone with a solution to the problems plaguing these floating islands, to submit their plans for a chance to win a Golden Ticket to one of the new homes on Peak C.

Illustration of Peak B property development on a floating island
© Clockwork Watch / Graham Leggett 2023

Attendees will be able to explore the streets of Sherlock Holmes’ London in Minecraft, created by Blockworks and Lancaster University, visit the Night Market, have a photograph taken with authentic Victorian Dark Box photography, or have a portrait drawn by artist Dr Geof – and that’s before the auction begins. But be warned: buying your way into this real estate dreamworld is not straightforward. This night is a golden opportunity for the Clockwork Watch underbelly of pickpockets, rogues and vagabonds.

Dressing up and joining in is heartily encouraged. To prepare for this event, we suggest reading the Clockwork Watch graphic novels; you can order these online, or purchase the first two omnibus editions from the British Library’s onsite shop. Also check out the London Gazette website and this special British Library edition of the newspaper. We hope to see you there!

Cover page of the London Gazette British Library edition
© Clockwork Watch

26 September 2023

Let’s learn together - Join us in the Cultural Heritage Open Scholarship Network

Do you work in a Galleries-Libraries-Archives-Museums (GLAM) or cultural heritage organisation as research support or research-active staff? Are you interested in developing knowledge and skills in open scholarship? Would you like to establish good practices, share your experience with others and collaborate? If your answer is yes to one or more of these questions, we invite you to join the Cultural Heritage Open Scholarship Network (CHOSN).

Initiated by the British Library’s Research Infrastructure Services, CHOSN builds on the experience of, and positive responses to, the open scholarship training programme run earlier this year. It is a community of practice for research support and research-active staff in GLAMs and other cultural heritage organisations who are interested in developing and sharing open scholarship knowledge and skills, organising events, and supporting each other in this area.

GLAMs produce a significant amount of research, but we may find ourselves with inadequate resources to make that research openly available, to gain the open scholarship skills needed to make it happen, or even to identify what forms research takes in these environments. CHOSN aims to provide a platform that creates synergy for those aiming for good practice in open scholarship.

CHOSN flyer image, text says: Cultural Heritage Open Scholarship Network (CHOSN). Are you working in Galleries-Libraries-Archives-Museums (GLAMs)? Join Us! To develop knowledge and skills in open scholarship, organise activities to learn and grow, and create a community of practise to collaborate and support each other.

This network will be of interest to anyone who facilitates, enables or supports research activities in GLAM organisations, including but not limited to research support staff, research-active staff, librarians, curatorial teams, IT specialists and copyright officers. Anyone who is interested in open scholarship and works in a cultural heritage organisation is welcome.

Join us in the Cultural Heritage Open Scholarship Network (CHOSN) to:

  • explore research activities, roles in GLAMs and make them visible,
  • develop knowledge and skills in open scholarship,
  • carry out capacity development activities to learn and grow, and
  • create a community of practice to collaborate and support each other.

We have set up a JISC mailing list to start communication with the network; you can join by signing up here. We will shortly organise an online meeting to kick off the network’s plans, explore how to move forward, and collectively discuss what we would like to do next. This will all be communicated via the CHOSN mailing list.

If you have any questions about CHOSN, we are happy to hear from you at [email protected].

21 September 2023

Convert-a-Card: Helping Cataloguers Derive Records with OCLC APIs and Python

This blog post is by Harry Lloyd, Research Software Engineer in the Digital Research team, British Library. You can sometimes find him at the Rose and Crown in Kentish Town.

Last week Dr Adi Keinan-Schoonbaert delved into the invaluable work that she and others have done on the Convert-a-Card project since 2015. In this post, I’m going to pick up where she left off, and describe how we’ve been automating parts of the workflow. When I joined the British Library in February, Victoria Morris and former colleague Giorgia Tolfo had prototyped programmatically extracting entities from transcribed catalogue cards and searching by title and author in the OCLC WorldCat database for any close matches. I have been building on this work, addressing the last yellow rectangle below, “Curator disambiguation and resolution”: namely, how curators choose between OCLC results and develop a MARC record fit for ingest into British Library systems.

A flow chart of the Convert-a-card workflow. Digital catalogue cards to Transkribus to bespoke language model to OCR output (shelfmark, title, author, other text) to OCLC search and retrieval and shelfmark correction to spreadsheet with results to curator disambiguation and resolution to collection metadata ingest
The Convert-a-Card workflow at the start of 2023


Entity Extraction

We’re currently working with the digitised images from two drawers of cards, one Urdu and one Chinese. Adi and Giorgia used a layout model on Transkribus to successfully tag different entities on the Urdu cards. The transcribed XML output then had ‘title’, ‘shelfmark’ and ‘author’ tags for the relevant text, making them easy to extract.

On the left an image of an Urdu catalogue card, on the right XML describing the transcribed text, including a "title" tag for the title line
Card with layout model and resulting XML for an Urdu card, showing the `structure {type:title;}` parameter on line one

The same method didn’t work for the Chinese cards, possibly because the cards are less consistently structured. There is, however, consistency in the vertical order of entities on the card: shelfmark comes above title comes above author. This meant I could reuse some code we developed for Rossitza Atanassova’s Incunabula project, which reliably retrieved title and author (and occasionally an ISBN).
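The idea of relying on that vertical order can be sketched in a few lines of Python. This is a simplified illustration, not the actual Incunabula project code, and the function name and dictionary keys are made up:

```python
def extract_entities(lines):
    """Assign entities by vertical position on the card: shelfmark is
    reliably the first non-empty line, with title and author following.
    Returns None for anything the card doesn't have."""
    lines = [line.strip() for line in lines if line.strip()]
    entities = {"shelfmark": None, "title": None, "author": None}
    for key, value in zip(("shelfmark", "title", "author"), lines):
        entities[key] = value
    return entities
```

Called on the transcribed lines of a card, for example `extract_entities(["15234.c.12", "A title", "An author"])`, this yields a dictionary ready for the WorldCat lookup described below.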

Two Chinese cards side-by-side, with different layouts.
Chinese cards. Although the layouts are variable, shelfmark is reliably the first line, with title and author following.


Querying OCLC WorldCat

With the title and author for each card, we were set up to query WorldCat, but how to do this when there are over two thousand cards in these two drawers alone? Victoria and Giorgia made impressive progress combining Python wrappers for the Z39.50 protocol (PyZ3950) and the MARC format (pymarc). With their prototype, a lot of googling of ASN.1, BER and Z39.50, and a couple of quiet weeks drifting through the web of references between the two packages, I built something that could turn a table of titles and authors for the Chinese cards into a list of MARC records. I had also brushed up on enough UTF-8 to work out why none of the Chinese characters were encoded correctly, and fix it.

For all that I enjoyed trawling through it, Z39.50 is, in the words of a 1999 tutorial, “rather hard to penetrate”, and nearly 35 years old. PyZ3950, the Python wrapper, hasn’t been maintained for two years, and making any changes to the code is a painstaking process. While Z39.50 remains widely used for transferring information between libraries, that doesn’t mean there aren’t better ways of doing things, and in the name of modernity OCLC offer a suite of APIs for their services. Crucially, there are endpoints on their Metadata API that allow search and retrieval of records in MARCXML format. As the British Library maintains a cataloguing subscription to OCLC, we have access to the APIs, so all that’s needed is a call to the OCLC OAuth server, a search on the Metadata API using title and author, then retrieval of the MARCXML for any results. This is very straightforward in Python: with the Requests package and about ten lines of code we can have our MARCXML matches.
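In outline, those ten-odd lines look something like the sketch below. The OAuth token URL and client-credentials grant are OCLC’s standard pattern, but the search endpoint path, index labels and scope shown here are illustrative; check OCLC’s Metadata API documentation for the real ones:

```python
def build_query(title, author):
    """Combine title and author into a keyword query string
    (the ti:/au: index labels are illustrative)."""
    return f'ti:"{title}" AND au:"{author}"'

def fetch_marcxml(title, author, client_id, client_secret):
    # requests is a third-party package: pip install requests
    import requests

    # 1. Client-credentials call to the OCLC OAuth server for a bearer token
    token = requests.post(
        "https://oauth.oclc.org/token",
        auth=(client_id, client_secret),
        data={"grant_type": "client_credentials",
              "scope": "WorldCatMetadataAPI"},
    ).json()["access_token"]

    # 2. Search the Metadata API by title and author, asking for MARCXML
    #    (endpoint path is illustrative; see OCLC's API documentation)
    resp = requests.get(
        "https://metadata.api.oclc.org/worldcat/search/bibs",
        params={"q": build_query(title, author)},
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/marcxml+xml"},
    )
    resp.raise_for_status()
    return resp.text
```

The returned MARCXML can then be parsed with pymarc for the disambiguation step described next.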

Selecting Matches

At all stages of the project we’ve needed someone to select the best match for a card from WorldCat search results. This responsibility currently lies with curators and cataloguers from the relevant collection area. With that audience in mind, I needed a way to present MARC data from WorldCat so curators could compare the MARC fields for different matches. The solution needed to let a cataloguer choose a card, show the card and a table with the MARC fields for each WorldCat result, and ideally provide filters so curators could use domain knowledge to filter out bad results. I put out a call on the cross-government data science network, and a colleague in the 10DS data science team suggested Streamlit.

Streamlit is a Python package that allows fast development of web apps without needing to be a web app developer (which is handy, as I’m not one). Adding Streamlit commands to the script that processes WorldCat MARC records into a dataframe quickly turned it into a functioning web app. The app reads in a dataframe of the cards in one drawer and their potential WorldCat matches, and presents it as a table of cards to choose from. You then see the image of the card you’re working on and a MARC field table for the relevant WorldCat matches. This side-by-side view makes it easy to scan across a particular MARC field and exclude matches that have, for example, the wrong physical dimensions. There’s a filter for cataloguing language, sort options for things like the number of subject access fields and the total number of fields, and the ability to remove bad matches from view. Once cataloguers have chosen a match they can save it to the original dataframe, or note that there was no good match, or only a partial one.
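A heavily simplified sketch of the pattern is below. The real app has filters, sorting and session state; the function names and data shapes here are invented for illustration:

```python
def marc_table(matches):
    """Pivot a list of WorldCat matches (each a dict mapping MARC field
    tag -> value) into rows of [tag, match 1, match 2, ...], so a
    cataloguer can scan one field across all the matches at once."""
    tags = sorted({tag for match in matches for tag in match})
    return [[tag] + [match.get(tag, "") for match in matches] for tag in tags]

def render_card(card_image, matches):
    # streamlit is a third-party package: pip install streamlit
    import streamlit as st
    st.image(card_image)               # the digitised catalogue card
    st.table(marc_table(matches))      # MARC fields side by side
```

Running `streamlit run app.py` on a script like this gives the side-by-side card and MARC view without any web development beyond the `st.*` calls.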

Screenshot from the Streamlit web app, with an image of a Chinese catalogue card above a table containing MARC data for different WorldCat matches relating to the card.
Screenshot from the Streamlit Convert-a-Card web app, showing the card and the MARC table curators use to choose between matches. As the cataloguers are familiar with MARC, providing the raw fields is the easiest way to choose between matches.

After some very positive initial feedback, we sat down with the Chinese curators and had them test the app out. That led to a fun, interactive, user-experience-focussed feedback session, and a whole host of GitHub issues on the repository for bugs and design suggestions. Behind-the-scenes discussions on where to host the app and data are ongoing and not straightforward, but this has been a remarkably easy product to prototype, and I’m optimistic it will provide a lightweight, gentle-learning-curve complement to full deriving software like Aleph (the Library’s main cataloguing system).

Next Steps

The project currently uses a range of technologies in Transkribus, the OCLC APIs, and Streamlit, and tying these together has in itself been a success. Going forward, we can look forward to extracting non-English text from the cards, and to the richer list of entities this would make available. Working with the OCLC APIs has been a learning curve, and they’re not working perfectly yet, but they represent a relatively accessible option compared to Z39.50. My hope for the Streamlit app is that it will be a useful tool beyond the project, wherever someone wants to use WorldCat to help derive records from minimal information. We still have challenges to overcome in design, data storage and hosting, but these discussions should have their own benefits in making future development easier. The goal for the automation part of the project is a smooth flow of data from Transkribus, through OCLC, and on to the curators; while it’s not perfect, we’re definitely getting there.

15 September 2023

London Fashion Week SS24: British Library x Ahluwalia

This year we will be continuing our collaboration with the British Fashion Council, running our annual student research competition, which encourages fashion students to use the British Library collections in creating their fashion designs. Once again, we will start the collaboration with a fashion show produced by a leading designer; this year we are delighted to be working with Priya Ahluwalia. Earlier this year Priya worked with the Business & IP Centre, contributing to the Inspiring Entrepreneurs’ International Women’s Day event, which discussed how we can best embrace and encourage diversity and inclusion in business.

On 15 September, during London Fashion Week, Priya will showcase her SS24 collection at the British Library. Following the show, Priya will lead this year’s student competition, focusing on the importance of research in the design process. As part of this competition, students across the UK will create fashion portfolios inspired by the Library’s unique collections.

The previous collaborations with the British Fashion Council involved a range of exciting designers such as Nabil El Nayal, Phoebe English, Supriya Lele and Charles Jeffrey.

Photo of fashion event with Phoebe English (2021)
Phoebe English’s fashion installation at the British Library in 2021


Previous student work utilised the riches of the Library’s digital and physical collections, with the Flickr collection being especially popular with students. However, inspiration came from many different directions, from art books, photographs and maps to the reading room bags.

This year’s student competition will be launched in October 2023.

Collage of different images of types of coats at the British Library including a man wearing a traditional Romanian winter coat, and a technical image detailing elements of a winter coat
From the winning portfolio of Mihai Popesku, Middlesex University student, who used the Library collections to research traditional Romanian dress

Update: there's been some great coverage in the fashion press (and social media), including this Vogue article that begins 'Priya Ahluwalia’s show purposefully took place at the British Library. More than just a venue, it tied into the theme of her work: bringing forgotten or untold stories about talented people to attention'.

14 September 2023

What's the future of crowdsourcing in cultural heritage?

The short version: crowdsourcing in cultural heritage is an exciting field, rich in opportunities for collaborative, interdisciplinary research and practice. It includes online volunteering, citizen science, citizen history, digital public participation, community co-production, and, increasingly, human computation and other systems that will change how participants relate to digital cultural heritage. New technologies like image labelling, text transcription and natural language processing, plus trends in organisations and societies at large mean constantly changing challenges (and potential). Our white paper is an attempt to make recommendations for funders, organisations and practitioners in the near and distant future. You can let us know what we got right, and what we could improve by commenting on Recommendations, Challenges and Opportunities for the Future of Crowdsourcing in Cultural Heritage: a White Paper.

The longer version: The Collective Wisdom project was funded by an AHRC networking grant to bring experts from the UK and the US together to document the state of the art in designing, managing and integrating crowdsourcing activities, and to look ahead to future challenges and unresolved issues that could be addressed by larger, longer-term collaboration on methods for digitally-enabled participation.

Our open access Collective Wisdom Handbook: perspectives on crowdsourcing in cultural heritage is the first outcome of the project, our expert workshops were a second.

Mia (me) and Sam Blickhan launched our White Paper for comment on PubPub at the Digital Humanities 2023 conference in Graz, Austria, in July this year, with Meghan Ferriter attending remotely. Our short paper abstract and DH2023 slides are online at Zenodo.

So - what's the future of crowdsourcing in cultural heritage? Head on over to Recommendations, Challenges and Opportunities for the Future of Crowdsourcing in Cultural Heritage: a White Paper and let us know what you think! You've got until the end of September…

You can also read our earlier post on 'community review' for a sense of the feedback we're after - in short, what resonates, what needs tweaking, what examples could we include?

To whet your appetite, here's a preview of our five recommendations. (To find out why we make those recommendations, you'll have to read the White Paper):

  • Infrastructure: Platforms need sustainability. Funding should not always be tied to novelty, but should also support the maintenance, uptake and reuse of well-used tools.
  • Evidencing and Evaluation: Help create an evaluation toolkit for cultural heritage crowdsourcing projects; provide ‘recipes’ for measuring different kinds of success. Shift thinking about value from output/scale/product to include impact on participants' and community well-being.
  • Skills and Competencies: Help create a self-guided skills inventory assessment resource, tool, or worksheet to support skills assessment, and develop workshops to support their integrity and adoption.
  • Communities of Practice: Fund informal meetups, low-cost conferences, peer review panels, and other opportunities for creating and extending community. They should have an international reach, e.g. beyond the UK-US limitations of the initial Collective Wisdom project funding.
  • Incorporating Emergent Technologies and Methods: Fund educational resources and workshops to help the field understand opportunities, and anticipate the consequences of proposed technologies.

What have we missed? Which points do you want to boost? (For example, we discovered how many of our points apply to digital scholarship projects in general). You can '+1' on points that resonate with you, suggest changes to wording, ask questions, provide examples and references, or (constructively, please) challenge our arguments. Our funding only supported participants from the UK and US, so we're very keen to hear from folk from the rest of the world.

12 September 2023

Convert-a-Card: Past, Present and Future of Catalogue Cards Retroconversion

This blog post is by Dr Adi Keinan-Schoonbaert, Digital Curator for Asian and African Collections, British Library. She's on Mastodon as @[email protected].


It’s been more than eight years since June 2015, when the British Library launched its crowdsourcing platform, LibCrowds, with the aim of enhancing access to our collections. The first project series on LibCrowds was called Convert-a-Card, followed by the ever-so-popular In the Spotlight project. The aim of Convert-a-Card was to convert print card catalogues from the Library’s Asian and African Collections into electronic records, for inclusion in our online catalogue Explore.

A significant portion of the Library's extensive historical collections was acquired well before the advent of standard computer-based cataloguing. Consequently, even though the Library's online catalogue offers public access to tens of millions of records, numerous crucial research materials remain discoverable solely through searching the traditional physical card catalogues. The physical cards provide essential information for each book, such as title, author, physical description (dimensions, number of pages, images, etc.), subject and a “shelfmark” – a reference to the item’s location. This information still constitutes the basic set of data to produce e-records in libraries and archives.

Card Catalogue Cabinets in the British Library’s Asian & African Studies Reading Room © Jon Ellis


The initial focus of Convert-a-Card was the Library’s card catalogues for Chinese, Indonesian and Urdu books – you can read more about this here and here. Scanned catalogue cards were uploaded to Flickr (and later to our Research Repository), grouped by the physical drawer in which they were originally located. Several of these digitised drawers became projects on LibCrowds.


Crowdsourcing Retroconversion

Convert-a-Card on LibCrowds included two tasks:

  1. Task 1 – Search for a WorldCat record match: contributors were asked to look at a digitised card and search the OCLC WorldCat database using some of the metadata elements printed on it (e.g. title, author, publication date), to see if a record for the book already existed in some form online. If found, they selected the matching record.
  2. Task 2 – Transcribe the shelfmark: if a match was found, contributors then transcribed the Library's unique shelfmark as printed on the card.

Online volunteers worked on Pinyin (Chinese), Indonesian and Urdu records, mainly between 2015 and 2019. Their valuable contributions resulted in lists of new records which were then ingested into the Library's Explore catalogue – making these items so much more discoverable to our users. For cards only partially matched with online records, curators and cataloguers had a special area on the LibCrowds platform through which they could address some of the discrepancies in partial matches and resolve them.

An example of an Urdu catalogue card


After much consideration, we have decided to sunset LibCrowds. However, you can see a good snapshot of it thanks to the UK Web Archive (with thanks to Mia Ridge and Filipe Bento for archiving it), or access its GitHub pages – originally set up and maintained by LibCrowds creator Alex Mendes. We have been using mainly Zooniverse for crowdsourcing projects (see for example Living with Machines projects), and you can see here some references to these and other crowdsourcing initiatives. Sunsetting LibCrowds provided us with the opportunity to rethink Convert-a-Card and consider alternative, innovative ways to automate or semi-automate the retroconversion of these valuable catalogue cards.


Text Recognition

As a first step, we were looking to automate the retrieval of text from the digitised cards using OCR/Machine Learning. As mentioned, this text includes shelfmark, title, author, place and date of publication, and other information. If extracted accurately enough, this text could be used for WorldCat lookup, as well as for enhancement of existing records. In most cases, the text was typewritten in English, often with additional information, or translation, handwritten in other languages. To start with, we’ve decided to focus only on the typewritten English – with the aspiration to address other scripts and languages in the future.

Last year, we ran some comparative testing with ABBYY FineReader Server (the software generally used for in-house OCR) and Transkribus, to see how accurately they perform this task. We trialled a set of cards with two different versions of ABBYY, and three different models for typewritten Latin scripts in Transkribus (Model IDs 29418, 36202, and 25849). Assessment was done by visually comparing the original text with the OCRed text, examining mainly the key areas of text which are important for this initiative, i.e. the shelfmark, author’s name and book title. For the purpose of automatically recognising the typewritten English on the catalogue cards, Transkribus Model 29418 performed better than the others – and more accurately than ABBYY’s recognition.

An example of a Pinyin card in Transkribus, showing segmentation and transcription


Using that as a base model, we incrementally trained a bespoke model to recognise the text on our Pinyin cards. We’ve also normalised the resulting text, for example removing spaces in the shelfmark, or excluding unnecessary bits of data. This model currently extracts the English text only, with a Character Error Rate (CER) of 1.8%. With more training data, we plan on extending this model to other types of catalogue cards – but for now we are testing this workflow with our Chinese cards.
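The kind of normalisation described above can be sketched with a small post-processing step. This is an illustration of the approach, not the project’s actual clean-up code, and the dictionary keys are made up:

```python
import re

def normalise(entities):
    """Post-process OCR output: strip stray whitespace everywhere,
    and collapse internal whitespace in the shelfmark, where OCR
    often inserts spurious spaces (e.g. "15234 .c. 12" -> "15234.c.12")."""
    cleaned = {}
    for key, value in entities.items():
        value = value.strip()
        if key == "shelfmark":
            value = re.sub(r"\s+", "", value)
        cleaned[key] = value
    return cleaned
```

Further rules, such as dropping unnecessary bits of data, would slot into the same loop.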


Entities Extraction

Extracting meaningful entities from the OCRed text is our next step, and there are different ways to do that. One such method – if already using Transkribus for text extraction – is training and applying a bespoke P2PaLA layout analysis model. Such a model could identify text regions, improve automated segmentation of the cards, and help retrieve specific regions for further tasks. Former colleague Giorgia Tolfo tested this with our Urdu cards, with good results. Trying to replicate this for our Chinese cards was not as successful, perhaps because they are less consistent in structure.

Another possible method is by using regular expressions in a programming language. Research Software Engineer (RSE) Harry Lloyd created a Jupyter notebook with Python code to do just that: take the PAGE XML files produced by Transkribus, parse the XML, and extract the title, author and shelfmark from the text. This works exceptionally well, and in the future we’ll expand entity recognition and extraction to other types of data appearing on the cards. But for now, this information suffices to query OCLC WorldCat and see if a matching record exists.
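As a rough sketch of that approach: PAGE XML puts each transcribed line in a `TextLine/TextEquiv/Unicode` element, which the Python standard library can parse, after which a regular expression can pick out entities. The shelfmark pattern below is invented for illustration; Harry’s notebook does the real work:

```python
import re
import xml.etree.ElementTree as ET

# PAGE XML namespace (the 2013-07-15 schema Transkribus typically exports)
NS = {"pc": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

def lines_from_page_xml(xml_string):
    """Pull the transcribed text lines out of a PAGE XML document,
    in reading order."""
    root = ET.fromstring(xml_string)
    return [
        unicode_el.text or ""
        for unicode_el in root.iterfind(".//pc:TextLine/pc:TextEquiv/pc:Unicode", NS)
    ]

def extract_shelfmark(lines):
    """Return the first line matching a shelfmark-like pattern
    (this regex is a made-up example, not the project's pattern)."""
    for line in lines:
        match = re.match(r"^\d{4,5}\.[a-z]{1,3}\.\d+", line.strip())
        if match:
            return match.group(0)
    return None
```

Title and author extraction follow the same shape, with patterns or positional rules of their own.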

One of the 26 drawers of Chinese (Pinyin) card catalogues © Jon Ellis


Matching Cards to WorldCat Records

Entities extracted from the catalogue cards can now be used to search and retrieve potentially matching records from the OCLC WorldCat database. Pulling out WorldCat records matched with our card records would help us create new records to go into our cataloguing system Aleph, as well as enrich existing Aleph records with additional information. This matching was previously done by volunteers; now we aim to automate the process as much as possible.

Querying WorldCat was initially done using the Z39.50 protocol – the same one originally used in LibCrowds. This is a client-server communications protocol designed to support the search and retrieval of information in a distributed network environment. Starting from an excellent prototype by Victoria Morris and Giorgia Tolfo, which used PyZ3950 and pymarc to query WorldCat, Harry refined the code and tested it successfully for data search and retrieval. Moving forward, we are likely to use the OCLC API for this, which should be a lot more straightforward!


Curator/Cataloguer Disambiguation

Getting potential matches from WorldCat is brilliant, but we would like to have an easy way for curators and cataloguers to make the final decision on the ideal match – which WorldCat record would be the best one as a basis to create a new catalogue record on our system. For this purpose, Harry is currently working on a web application based on Streamlit – an open source Python library that enables the building and sharing of web apps. Staff members will be able to use this app by viewing suggested matches, and selecting the most suitable ones.

I’ll leave it up to Harry to tell you about this work – so stay tuned for a follow-up blog post very soon!


11 September 2023

Join the British Library's Universal Viewer Product Team

The British Library has been a leading contributor to IIIF, the International Image Interoperability Framework, and the Universal Viewer for many years. We're about to take the next step in this work - and you can join us! We are recruiting for a Product Owner, a Research Software Engineer and a Senior Test Engineer (deadline 03 January 2024). 

In this post, Dr Mia Ridge, product owner for the Universal Viewer (UV) 2015-18, and Dr Rossitza Atanassova, UV business owner 2019-2023, share some background information on how new posts advertised for a UV product team will help shape the future of the Viewer at the Library and contribute to international work on the UV, IIIF standards and activities.

A lavishly decorated page from a fourteenth-century manuscript, 'The Sherborne Missal', showing an illuminated capital with the Virgin Mary holding baby Jesus, surrounded by the three Kings, with other illuminations in the margins and the text.
Detail from Add MS 74236 'The Sherborne Missal' displayed in the Universal Viewer

 The creation of a Universal Viewer product team is part of wider infrastructure changes at the British Library, and marks a shift from contributing via specific UV development projects to thinking of the Viewer as a product. We'll continue to work with the Open Collective while focusing on Library-specific issues to support other activities across the organisation. 

Staff across the Library have contributed to the development of the Universal Viewer, including curators, digitisation teams and technology staff. Staff engage through bespoke training delivered by the IIIF Consortium, participation in IIIF workshops and conferences, and experimentation with new tools, such as the digital storytelling tool Exhibit, to engage wide audiences. Other Library work with IIIF includes a collaboration with Zooniverse to enable items to be imported to Zooniverse via IIIF manifests, making crowdsourcing more accessible to organisations with IIIF items. Most recently, with funding from the Andrew W. Mellon Foundation, we updated the UV to play audio from the British Library sound collections.

Over half a million items from the British Library's collections are already available via the Universal Viewer, and that number grows all the time. Work on the UV has already let us retire around 35 other image viewers, significantly reducing maintenance overheads and creating a more consistent experience for our readers.

However, there's a lot more to do! User expectations change as people use other document and media viewers, whether that's other IIIF tools like Mirador or the latest commercial streaming video platforms. We also need to work on some technical debt, ensure accessibility standards are met, improve infrastructure, and consolidate services for the benefit of users. Future challenges include enhancing UV capabilities to display annotations, formats such as newspapers, and complex objects such as 3D.

A view of the Library's image viewer, showing an early nineteenth century Javanese palm-leaf manuscript inside its decorated wooden covers. To the left of the image there is a list with the thumbnails of the manuscript leaves and to the right the panel displays bibliographic information about the item.
British Library Universal Viewer displaying Add MS 12278

If you'd like to work in collaboration with an international open source community on a viewer that will reach millions of users around the world, one of these jobs may be for you!

Product Owner (job reference R00000196)

Ensure the strategic vision, development, and success of the project. Your primary goal will be to understand user needs, prioritise features and enhancements, and collaborate with the development team and community to deliver a high-quality open source product. 

Research Software Engineer (job reference R00000197)

Help identify requirements, and design and implement online interfaces to showcase our collections, help answer research questions, and support application of novel methods across team activities.

Senior Test Engineer (job reference R00000198)

Help devise requirements, develop high quality test cases, and support application of novel methods across team activities.

To apply please visit the British Library recruitment site. Applications close on 3 January 2024. Interview dates are listed in the job ads.

Please ensure you answer all application questions (CVs cannot be submitted). At the British Library we can only shortlist using the information that applicants provide in response to questions on the application. Any questions about the roles or the process? Drop us a line at [email protected].

06 September 2023

Open and Engaged 2023: Community over Commercialisation

The British Library is delighted to host its annual Open and Engaged Conference on Monday 30 October, in-person and online, as part of International Open Access Week.

Open and Engaged 2023: Community over Commercialisation, includes headshots of speakers and lists location as The British Library, London and contact as openaccess@bl.uk

In line with this year’s #OAWeek theme, Open and Engaged 2023: Community over Commercialisation will address approaches and practices to open scholarship that prioritise the best interests of the public and the research community. The programme will focus on the community-governance, public-private collaboration, and community-building aspects of the topic, keeping the public good at the heart of the talks. It will underline different priorities and approaches for Galleries-Libraries-Archives-Museums (GLAMs) and the cultural sector in the context of open access.

We invite everyone interested in the topic to join us on Monday, 30 October!

This will be a hybrid event taking place at the British Library’s Knowledge Centre in St. Pancras, London, and streamed online for those unable to attend in-person.

You can register for Open and Engaged 2023 by filling in this form by Thursday, 26 October 18:00 BST. Please note that the places for in-person attendance are now full and the form is available only for online booking.

Registrants will be contacted with details for either in-person attendance or a link to access the online stream closer to the event.

Programme

Note that clocks change back to GMT in the UK on Sunday, 29 October.

9:30     Registration opens for in-person attendees. Entrance Hall at the Knowledge Centre.

10:00   Welcome

10:10   Keynote from Monica Westin, Senior Product Manager at the Internet Archive

Commercial Break: Imagining new ownership models for cultural heritage institutions.

10:40   Session on public-private collaborations for public good chaired by Liz White, Director of Library Partnerships at the British Library.

  • Balancing public-private partnerships with responsibilities to our communities. Mia Ridge, Digital Curator, Western Heritage Collections, The British Library
  • Where do I stand? Deconstructing Digital Collections [Research] Infrastructures: A perspective from Towards a National Collection. Javier Pereda, Senior Researcher, Towards a National Collection (TaNC)
  • "This is not IP I'm familiar with." The strange afterlife and untapped potential of public domain content in GLAM institutions. Douglas McCarthy, Head of Library Learning Centre, Delft University of Technology.

11:40   Break

12:10   Lightning talks on community projects chaired by Graham Jevon, Digital Service Specialist at the British Library.

  • The Turing Way: Community-led Resources for Open Research and Data Science. Emma Karoune, Senior Research Community Manager, The Alan Turing Institute.
  • Open Online Tools for Creating Interactive Narratives. Giulia Carla Rossi, Curator for Digital Publications and Stella Wisdom, Digital Curator for Contemporary British Collections, The British Library

12:45   Lunch

13:30   Session on the community-centred infrastructure in practice chaired by Jenny Basford, Repository Services Lead at the British Library.

  • AHRC, Digital Research Infrastructure and where we want to go with it. Tao Chang, Associate Director, Infrastructure & Major Programmes, Arts and Humanities Research Council (AHRC)
  • The critical role of repositories in advancing open scholarship. Kathleen Shearer, Executive Director, Confederation of Open Access Repositories (COAR). (Remote talk)
  • Investing in the Future of Open Infrastructure. Kaitlin Thaney, Executive Director, Invest in Open Infrastructure (IOI). (Remote talk)

14:30   Break

15:00   Session on the role of research libraries in prioritising the community chaired by Ian Cooke, Head of Contemporary British Publications at the British Library.

  • Networks of libraries supporting open access book publishing. Rupert Gatti, Co-founder and Director of Open Book Publishers, Director of Studies in Economics at Trinity College Cambridge
  • Collective action for driving the open science agenda in Africa and Europe. Iryna Kuchma, Open Access Programme Manager at EIFL. (Remote talk)
  • The Not So Quiet Rights Retention Revolution: Research Libraries, Rights and Supporting our Communities. William Nixon, Deputy Executive Director at RLUK-Research Libraries UK

16:00   Closing remarks

Social media hashtag for the event is #OpenEngaged. If you have any questions, please contact us at [email protected].