Digital scholarship blog

Enabling innovative research with British Library digital collections


28 February 2023

Legacies of Catalogue Descriptions Project Events at Yale

In January James Baker and I visited the Lewis Walpole Library at Yale, which is the US partner of the Legacies of Catalogue Descriptions collaboration. The visit had to be postponed several times due to the pandemic, so we were delighted to finally meet in person with Cindy Roman, our counterpart at Yale. The main reason for the trip was to disseminate the findings of our project by running workshops on tools for the computational analysis of catalogue data and delivering talks on Researching the Histories of Cataloguing to (Try to) Make Better Metadata. Two of these events were kindly hosted by Kayla Shipp, Programme Manager of the fabulous Franke Family Digital Humanities Lab (DH Lab).

A photo of Cindy Roman, Rossitza Atanassova, James Baker and Kayla Shipp standing in a line in the middle of the Yale Digital Humanities Lab
(left to right) Cindy Roman, Rossitza Atanassova, James Baker and Kayla Shipp in the Yale Digital Humanities Lab

This was my first visit to the Yale University campus, so I took the opportunity to explore its iconic library spaces, including the majestic Sterling Memorial Library building, a masterpiece of Gothic Revival architecture, and the world-renowned Beinecke Rare Book and Manuscript Library, whose glass tower inspired the King's Library Tower at the British Library. As well as being amazing hubs for learning and research, the library buildings and exhibition spaces are also open to public visitors. At the time of my visit I explored the early printed treasures on display at the Beinecke Library, the exhibit about Martin Luther King Jr's connection with Yale, and the splendid display of highlights from Yale's Slavic collections, including Vladimir Nabokov's CV for a job application to Yale and a family photo album that belonged to the Romanovs.

A selfie of Rossitza Atanassova with the building of the Sterling Memorial Library in the background
Outside Yale's Sterling Memorial Library

A real highlight of my visit was the day I spent at the Lewis Walpole Library (LWP), located in Farmington, about 40 miles from the Yale campus. The LWP is a research centre for eighteenth-century studies and an essential resource for the study of Horace Walpole. The collections, including important holdings of British prints and drawings, were donated to Yale by Wilmarth and Annie Lewis in the 1970s, together with several eighteenth-century historic buildings and land.

Prior to my arrival James had conducted archival research with the catalogues of the LWP satirical prints collections, a case study for our project. As well as visiting the modern reading room to take a look at the printed card catalogues, many in the hand of Mrs Lewis, we were given a tour of Mr and Mrs Lewis' house, which is now used for classes, workshops and meetings. I enjoyed meeting the LWP staff and learned much about the history of the place, the collectors' lives and the LWP's current initiatives.

One of the historic buildings on the Lewis Walpole Library site - The Roots House, a white Georgian-style building with a terrace, used to house visiting fellows and guests
The Root House which houses residential fellows

 

One of the historic buildings on the Lewis Walpole Library site - a red-coloured building surrounded by trees
Thomas Curricomp House

 

The main house, a white Georgian-style house, seen from the side, with the entrance to the Library on the left
The Cowles House, where Mr and Mrs Lewis lived

 

The two project events I was involved with took place at the Yale DH Lab. During the interactive workshop, Yale library staff, faculty and students worked through the training materials on using AntConc for computational analysis and performed a number of tasks with the LWP satirical prints descriptions. There were discussions about the different ways of querying the data and the suitability of this tool for use with non-European languages and scripts. It was great to hear that this approach could prove useful for querying and promoting Yale's own open access metadata.
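AntConc itself is a desktop GUI application, but the keyword-in-context (KWIC) concordance view at the heart of this kind of analysis can be sketched in a few lines of Python. The snippet below is an illustration only; the sample text is invented and is not drawn from the LWP descriptions:

```python
import re

def kwic(text, keyword, width=25):
    """Return keyword-in-context rows: `width` characters of context
    either side of each case-insensitive match of `keyword`."""
    rows = []
    for m in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        rows.append(f"{left:>{width}} [{m.group(0)}] {right}")
    return rows

# Invented catalogue-style description, for illustration only
sample = ("A satirical print; etching with engraving. "
          "A satire on the contemporary fashion for enormous wigs.")
for row in kwic(sample, "satir"):
    print(row)
```

Tools like AntConc layer word-level tokenisation, sorting of the left and right context, and frequency lists on top of this basic idea.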

 

James talking to a group of people seated at a table, with a screen behind him showing some text data
James presenting at the workshop about AntConc
Rossitza standing next to a screen with a slide about her talk facing the audience
Rossitza presenting her research with incunabula catalogue descriptions

 

The talks addressed questions around cataloguing labour and curatorial voices, and the extent to which computational analysis enables new research questions and can assist practitioners with remedial work involving collections metadata. I spoke about my current RLUK fellowship project with the British Library incunabula descriptions, and in particular the history of cataloguing, the process of producing output text data, and some hypotheses to be tested through computational analysis. The discussion that followed raised questions about the effort that goes into this type of work and the need to balance greater user access to library and archival collections with important considerations about the quality and provenance of metadata.

During my visit I had many interesting conversations with Yale Library staff, including Nicole Bouché, Daniel Lovins and Daniel Dollar, and caught up with folks I had met at the 2022 IIIF Conference: Tripp Kirkpatrick, Jon Manton and Emmanuelle Delmas-Glass. I was curious to learn about recent organisational changes aimed at unifying the Yale special collections and enhancing digital access via IIIF metadata, and about the new roles of Director of Computational Data and Methods, in charge of the DH Lab, and Cultural Heritage Data Engineer, tasked with transforming Yale data into LOUD.

This has been a truly informative and enjoyable visit and my special thanks go to Cindy Roman and Kayla Shipp who hosted my visit and project events at the start of a busy term and to James for the opportunity to work with him on this project.

This blogpost is by Dr Rossitza Atanassova, Digital Curator for Digitisation, British Library. She is on Twitter @RossiAtanassova and Mastodon @[email protected]

30 November 2022

Skills and Training Needs to Open Heritage Research Through Repositories: Scoping report and Repository Training Programme for cultural heritage professionals

Do you think the repository landscape is mature enough in the heritage sector? Are the policies, infrastructure and skills in place to open up heritage research through digital repositories? Our brief analysis shows that research activity in GLAMs needs better acknowledgement, established digital repositories for the dissemination of outputs, and empowered staff who can make use of repository services. At the British Library, we have published a report, Scoping Skills and Developing Training Programme for Managing Repository Services in Cultural Heritage Organisations. We looked at the roles and people involved in the research workflow in GLAMs, and at the skills they need to share heritage research openly through digital repositories, in order to develop a training programme for cultural heritage professionals.

 

Making heritage research openly available

Making research openly available to everyone increases the reach and impact of the work, driving increased value for money in research investment, and helps to make research reusable for everyone. ‘Open’ in this context is not only about making research freely accessible but also about ensuring the research is shared with rich metadata, licensed for reuse, including persistent identifiers, and is discoverable. Communicating research in GLAM contexts goes beyond journal articles. Digital scholarship, practice-based and computational research approaches generate a wide range of complex objects that need to be shared, reused to inform practice, policy and future research, and cannot necessarily be assessed with common metrics and rankings of academia.

The array of research activity in GLAMs needs to be addressed in the context of research repositories. If you look at OpenDOAR and Re3data, the global directories of open repositories, the number of repositories in the cultural heritage sector is still small compared to academic institutions. There is an increasing need to establish repositories for heritage research and to empower cultural heritage professionals to make use of repository services. Staff who are involved in supporting research activities, managing digital collections, and providing research infrastructure in GLAM organisations must be supported with capacity development programmes to establish open scholarship activities and share their research outputs through research repositories.

 

Who is involved in the research activities and repository services?

This question is important considering that staff may not be explicitly research-active, yet research is regularly conducted in addition to day-to-day jobs in GLAMs. In addition, organisations in the heritage sector are not primarily driven by a research agenda. The study we undertook as part of an AHRC-funded repository infrastructure project showed us that cultural heritage professionals are challenged by the invisibility of the forms of research conducted in their day-to-day jobs, as well as by a lack of dedicated time and staff to work on open scholarship.

In order to bring clarity to the personas involved in research activities and link them to competencies and training needs later on for the purpose of this work, we defined five profiles that carry out and contribute to research in cultural heritage organisations. These five profiles illustrate the researcher as a core player, alongside four other profiles involved in making research happen, and ensuring it can be published, shared, communicated and preserved.

 

A 5 column chart showing 'researchers', 'curators and content creators', 'infomediaries', 'infrastructure architects', and 'policy makers' as the key personas identified.
Figure 1. Profiles identified in the cultural heritage institutions to conduct, facilitate, and support research workflow.

 

 

Consultation on training needs for repository services

We explored the skill gaps and training needs of GLAM professionals from curation to rights management, and open scholarship to management of repository services. In addition to scanning the training landscape for competency frameworks, existing programmes and resources, we conducted interviews to explore training requirements relevant to repository services. Finally, we validated initial findings in a consultative workshop with cultural heritage professionals, to hear their experience and get input to a competency framework and training curriculum.

Interviews highlighted a lack of knowledge and support in cultural heritage organisations, where institutional backing and training are not guaranteed for research communication or open scholarship. In terms of types of research activity, the workshop prompted interesting discussions about what constitutes 'research' in the cultural heritage context and what makes it different from research in a university context. The event underlined the fact that the cultural heritage staff profiles for producing, supporting and communicating research differ from those in the higher education landscape at many levels.

 

Discussion board showing virtual post its stuck to a canvas with a river in the background, identifying three key areas: 'What skills and knowledge do we already have?', 'What training elements are required?', and 'What skills and knowledge do we need?' (with the second question acting as a metaphorical bridge over the river).
Figure 2: Discussion board from the Skills and Training Breakout Session in virtual Consultative Workshop held on 28/04/2022.

 

The interviews and the consultative workshop highlighted that the ways research is conducted and communicated in the cultural heritage sector (as opposed to academia) should be taken into account when identifying the skills needed and developing training programmes in the areas of open scholarship.

 

Competency framework and curriculum for repository training programme

There is a wealth of information, valuable project outputs, and a number of good analytical works available to help identify gaps and gain new skills, particularly in the areas of open science, scholarly communications and research data management. However, adapting these works to the context of cultural heritage organisations and the relevant professionals will increase their relevance and uptake. Drawing on our desk research and workshop analysis, we developed a competency framework that sets out the knowledge and skills required to support open scholarship for the personas present in GLAM organisations. The topic clusters used in the framework are as follows:

  1. Repository service management
  2. Curation & data stewardship
  3. Metadata management
  4. Preservation
  5. Scholarly publishing
  6. Assessment and impact
  7. Advocacy and communication
  8. Capacity development

The proposed curriculum was designed by considering the pathways to develop, accelerate and manage a repository service. It contains only the areas that we identified as priorities for delivering the most value to cultural heritage organisations. Five teaching modules are considered in this preliminary work:

  1. Opening up heritage research
  2. Getting started with GLAM repositories
  3. Realising and expanding the benefits
  4. Exploring the scholarly communications ecosystem
  5. Topics for future development

A complete version of the competency framework and the curriculum can be found in the report and is also available as a Google spreadsheet. They will drive increased uptake and use of repositories across AHRC’s investments, increasing value for money from both research funding and infrastructure funding.

 

What is next?

From January to July 2023, we at the British Library will prepare a core set of materials based on this curriculum and deliver training events in a combination of online and in-person workshops. In-person training events are being planned for Scotland, the North of England and Wales, in addition to several online sessions. Both the framework and the training curriculum will be refined as we receive feedback and input from the participants of these events throughout next year. Event details will be announced in collaboration with host institutions on this blog as well as on our social media channels. Watch this space for more information.

If you have any feedback or questions, please contact us at [email protected].

29 November 2022

My AHRC-RLUK Professional Practice Fellowship: Four months on

In August 2022 I started work on a project to investigate the legacies of curatorial voice in the descriptions of incunabula collections at the British Library and their future reuse. My research is funded by the collaborative AHRC-RLUK Professional Practice Fellowship Scheme for academic and research libraries which launched in 2021. As part of the first cohort of ten Fellows I embraced this opportunity to engage in practitioner research that benefits my institution and the wider sector, and to promote the role of library professionals as important research partners.

The overall aim of my Fellowship is to demonstrate new ways of working with digitised catalogues that would also improve the discoverability and usability of the collections they describe. The focus of my research is the Catalogue of books printed in the 15th century now at the British Museum (or BMC) published between 1908 and 2007 which describes over 12,700 volumes from the British Library incunabula collection. By using computational approaches and tools with the data derived from the catalogue I will gain new insights into and interpretations of this valuable resource and enable its reuse in contemporary online resources. 

Titlepage to volume 2 of the Catalogue of books printed in the fifteenth century now in the British Museum, part 2, Germany, Eltvil-Trier
BMC volume 2 titlepage


This research idea was inspired by a recent collaboration with Dr James Baker, who is also my mentor for this Fellowship, and was further developed in conversations with Dr Karen Limper-Herz, Lead Curator for Incunabula, Adrian Edwards, Head of Printed Heritage Collections, and Alan Danskin, Collections Metadata Standards Manager, who support my research at the Library.

My Fellowship runs until July 2023, with Fridays being my main research days. I began by studying the history of the catalogue, its arrangement, the structure of the item descriptions and their relationship with different online resources. Overall, the main focus of this first phase has been on generating the text data required for the computational analysis and for investigations into curatorial and cataloguing practice. This work involved new digitisation of the catalogue and a lot of experimentation with the AI-powered Transkribus platform, which proved best suited to improving the layout and text recognition for the digitised images. During the last two months I have benefited hugely from the expertise of my colleague Tom Derrick, as we worked together on creating the training data and building structure models for the incunabula catalogue images.
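Transkribus can export its layout and text recognition output as PAGE XML, which makes the transcribed lines straightforward to pull out for downstream analysis with standard tooling. A minimal sketch using Python's standard library follows; the inline XML is a made-up fragment in the PAGE format, not actual project data:

```python
import xml.etree.ElementTree as ET

# PAGE XML namespace used in Transkribus exports (2013-07-15 schema)
NS = {"pc": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15">
  <Page imageFilename="example_page.jpg">
    <TextRegion id="r1">
      <TextLine id="l1"><TextEquiv><Unicode>GUTENBERG, JOHANN</Unicode></TextEquiv></TextLine>
      <TextLine id="l2"><TextEquiv><Unicode>Mainz, about 1455.</Unicode></TextEquiv></TextLine>
    </TextRegion>
  </Page>
</PcGts>"""

def extract_lines(page_xml):
    """Collect the Unicode text of every TextLine, in document order."""
    root = ET.fromstring(page_xml)
    return [u.text or ""
            for u in root.findall(".//pc:TextLine/pc:TextEquiv/pc:Unicode", NS)]

print(extract_lines(SAMPLE))
```

Each TextLine in a real export also carries baseline coordinates, which is useful when checking layout recognition for multi-column pages like these.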

An image from Transkribus Lite showing a page from the catalogue with separate regions drawn around columns 1 and 2, and the text baselines highlighted in purple
Layout recognition output for pages with only two columns, including text baselines, viewed on Transkribus Lite

 

An image from Transkribus Lite showing a page from the catalogue alongside the text lines
Text recognition output after applying the model trained with annotations for 2 columns on the page, viewed on Transkribus Lite

 

An image from Transkribus Lite showing a page from the catalogue with separate regions drawn around 4 columns of text separated by a single text block
Layout recognition output for pages with mixed layout of single text block and text in columns, viewed on Transkribus Lite

Whilst the data preparation phase has taken longer than planned due to the varied layout of the catalogue, it has been an important part of the process, as the project outcomes depend on having the best quality text data for the incunabula descriptions. The next phase of the research will involve segmenting the records and extracting relevant information to use with a range of computational tools. I will report on progress and the next steps early next year. Watch this space, and do get in touch if you would like to learn more about my research.

This blogpost is by Dr Rossitza Atanassova, Digital Curator for Digitisation, British Library. She is on Twitter @RossiAtanassova and Mastodon @[email protected]

05 August 2022

Burmese Script Conversion using Aksharamukha

This blog post is by Dr Adi Keinan-Schoonbaert, Digital Curator for Asian and African Collections, British Library. She's on Twitter as @BL_AdiKS.

 

Curious about Myanmar (Burma)? Did you know that the British Library has a large collection of Burmese materials, including manuscripts dating back to the 17th century, early printed books, newspapers, periodicals, as well as current material?

You can search our main online catalogue, Explore the British Library, for printed material, or the Explore Archives and Manuscripts catalogue for manuscripts. But to increase your chances of discovering printed resources, you will need to search the Explore catalogue by typing in the transliteration of the Burmese title and/or author using the Library of Congress romanisation rules. This means that searching for an item using the original Burmese script, or using what you would intuitively consider to be the romanised version of Burmese script, is not going to get you very far (not yet, anyway).

Excerpt from the Library of Congress romanisation scheme

 

The reason is that this is how we catalogue Burmese collection items at the Library, following a policy of transliterating Burmese using the Library of Congress (LoC) rules. In theory, the benefit of this system specifically for Burmese is that it enables two-way transliteration, i.e. the romanisation can be precisely reversed to give the Burmese script. However, a major issue arises from this romanisation system: romanised versions of Burmese script are so far removed from their phonetic renderings that most Burmese speakers are completely unable to recognise any Burmese words.
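The two-way property can be illustrated with a toy mapping. The three consonants and romanisations below are only an illustrative fragment, not the actual LoC table, but they show why a reversible scheme needs unambiguous, longest-match decoding (so that 'kh' is not read as 'k' plus 'h'):

```python
# Toy fragment of a reversible transliteration table
# (Burmese consonants ka, kha, ga - illustrative only, not the LoC table)
TO_ROMAN = {"က": "k", "ခ": "kh", "ဂ": "g"}
TO_BURMESE = {v: k for k, v in TO_ROMAN.items()}

def romanise(text):
    return "".join(TO_ROMAN.get(ch, ch) for ch in text)

def deromanise(text):
    # Greedy longest-match so "kh" is decoded before "k"
    keys = sorted(TO_BURMESE, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for key in keys:
            if text.startswith(key, i):
                out.append(TO_BURMESE[key])
                i += len(key)
                break
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

word = "ခဂက"
assert deromanise(romanise(word)) == word  # round-trips exactly
```

The real LoC scheme adds diacritics and context-dependent rules on top of this, which is precisely what makes its output hard for Burmese speakers to recognise.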

With the LoC scheme being unintuitive for Burmese speakers and not reflecting the spoken language, British Library catalogue records for Burmese printed materials end up virtually inaccessible to users. And we're not alone with this problem – other libraries worldwide that hold Burmese collections and use the LoC romanisation scheme face the same issues.

The Buddha at Vesali in a Burmese manuscript, from the Henry Burney collection. British Library, Or. 14298, f. 1

 

One useful solution could be to find or develop a tool that converts the LoC romanisation output into Burmese script, and vice versa – similar to how you would use Google Translate. Maria Kekki, our Curator for Burmese collections, has discovered the online tool Aksharamukha, which aims to facilitate conversion between various scripts – also referred to as transliteration (transliteration into the Roman alphabet is specifically referred to as romanisation). It supports 120 scripts and 21 romanisation methods, and luckily Burmese is one of them.

Aksharamukha: Script Converter screenshot

 

Using Aksharamukha has already been of great help to Maria. Instead of painstakingly converting Burmese script manually into its romanised version, she could now copy-paste the conversion and make any necessary adjustments. She also noticed making fewer errors this way! However, it was missing one important thing – the ability to directly transliterate Burmese script specifically using the LoC romanisation system.

Such functionality would not only save our curatorial and acquisitions staff a significant amount of time, but also help any other libraries holding Burmese collections and following the LoC guidelines. It would also allow Burmese speakers to find material in the library catalogue much more easily – readers will be able to use this platform to find items in our collection, as well as in other collections around the world.

To this end, Maria got in touch with the developer of Aksharamukha, Vinodh Rajan – a computer scientist who is also an expert in writing systems, languages and digital humanities. Vinodh was happy to implement two things: (1) add the LoC romanisation scheme as one of the transliteration options, and (2) add spaces in between words (when it comes to spacing, according to the LoC romanisation system, there are different rules for words of Pali and English origin, which are written together).

Vinodh demonstrating the new Aksharamukha functionality, June 2022

 

Last month (July 2022) Vinodh implemented the new system and, what can we say, the result is just fantastic! Readers are now able to copy-paste transliterated text into the Library's catalogue search box to see if we hold items of interest. It is also a significant improvement for cataloguing and acquisition processes, as staff are now able to create acquisition records and minimal records more easily. As a next step, we will look into updating all of our Burmese catalogue records to include Burmese script (alongside the transliteration), and consider a similar course of action for other South and Southeast Asian scripts.

I should mention that as a bonus, Aksharamukha’s codebase is fully open source, is available on GitHub and is well documented. If you have feedback or notice any bugs, please feel free to raise an issue on GitHub. Thank you, Vinodh, for making this happen!

 

27 June 2022

IIIF-yeah! Annual Conference 2022

At the beginning of June Neil Fitzgerald, Head of Digital Research, and I attended the annual International Image Interoperability Framework (IIIF) Showcase and Conference in Cambridge, MA. The showcase was held in the Massachusetts Institute of Technology's iconic lecture theatre 10-250, and the conference in the Fong Auditorium of Boylston Hall on Harvard's campus. There was a stillness on the MIT campus; in contrast, Harvard Yard was busy with sightseeing members of the public and the dismantling of marquees from the end-of-year commencements of the previous weeks.

View of the Massachusetts Institute of Technology Dome IIIF Consortium sticker reading IIIF-yeah! Conference participants outside Boylston Hall, Harvard Yard


The conference atmosphere was energising, with participants excited to be back at an in-person event, the last one having been held in 2019 in Göttingen, with virtual meetings in the meantime. During the last decade IIIF has been growing, as reflected by the fast-expanding community and the IIIF Consortium, which now comprises 63 organisations from across the GLAM and commercial sectors.

The Showcase on June 6th was an opportunity to welcome those new to IIIF and highlight recent community developments. I had the pleasure of presenting the work of the British Library and Zooniverse to enable new IIIF functionality on Zooniverse in support of our In the Spotlight project, which crowdsources information about the Library's historical playbills collection. Other presentations covered the use of IIIF with audio, maps, and in teaching, learning and museum contexts, as well as the exciting plans to extend the IIIF standards to 3D data. Harvard University gave an update on their efforts to adopt IIIF across the organisation, and their IIIF resources webpage is a useful reference. I was particularly impressed by the Leventhal Map and Education Center's digital maps initiatives, including their collaboration on Allmaps, a set of open source tools for curating, georeferencing and exploring IIIF maps.

The following two days were packed with brilliant presentations on IIIF infrastructure, collections enrichment, IIIF resource discovery, IIIF-enabled digital humanities teaching and research, improving user experience, and more. Digirati presented a new IIIF manifest editor, which is being further developed to support various use cases. Ed Silverton reported on the newest features of the Exhibit tool, which we at the British Library have started using to share engaging stories about our IIIF collections.

 Ed Silverton presenting a slide about the Exhibit tool Conference presenters talking about the Audiovisual Metadata Platform Conference reception under a marquee in Harvard Yard

I was interested to hear about Getty's vision of IIIF as an enabling technology, how it fits within their shared data infrastructure, and their multiple use cases, including driving image backgrounds based on colour palette annotations and the Quire publication process. It was great to hear how IIIF has been used in digital humanities research, as in the Mapping Colour in History project at Harvard, which enables historical analysis of artworks through pigment data annotations, or how IIIF helps to solve some of the challenges of aggregating remote resources for the Paul Laurence Dunbar initiative.

There was also much excitement about the detektIIIF browser extension for Chrome and Firefox, which detects IIIF resources in websites and helps collect and export IIIF manifests. Zentralbibliothek Zürich's customised version, ZB-detektIIIF, allows scholars to create IIIF collections in JSON-LD and link to the Mirador Viewer. There were several great presentations about IIIF players and tools for audio-visual content, such as Avalon, Aviary, Clover, the Audiovisual Metadata Platform and the Mirador video extension. And no IIIF Conference is ever complete without a #FunWithIIIF presentation by Cogapp's Tristan Roddis, this one capturing 30 cool projects using IIIF content and technology!

We all enjoyed lots of good conversations during the breaks and social events, and some great tours were on offer. Personally, I chose to visit the Boston Public Library's Leventhal Map and Education Center, with its exhibition about environment and social justice, and the BPL Digitisation Studio, the latter equipped with Internet Archive scanning stations and an impressive maps photography room.

Boston Public Library book trolleys Boston Public Library Maps Digitisation Studio Rossitza Atanassova outside Boston Public Library


I was also delighted to pay a visit to the Harvard Libraries digitisation team who generously showed me their imaging stations and range of digitised collections, followed by a private guided tour of the Houghton Library’s special collections and beautiful spaces. Huge thanks to all the conference organisers, the local committee, and the hosts for my visits, Christine Jacobson, Bill Comstock and David Remington. I learned a lot and had an amazing time. 

Finally, all presentations from the three days have been shared, and some highlights were captured on Twitter under #iiif. In addition, this week the Consortium is offering four free online workshops to share IIIF best practices and tools with the wider community. Don't miss your chance to attend.

This post is by Digital Curator Rossitza Atanassova (@RossiAtanassova)

20 April 2022

Importing images into Zooniverse with a IIIF manifest: introducing an experimental feature

Digital Curator Dr Mia Ridge shares news from a collaboration between the British Library and Zooniverse that means you can more easily create crowdsourcing projects with cultural heritage collections. There's a related blog post on Zooniverse, Fun with IIIF.

IIIF manifests - text files that tell software how to display images, sound or video files alongside metadata and other information about them - might not sound exciting, but by linking to them you can view and annotate collections from around the world. The IIIF (International Image Interoperability Framework) standard makes images (or audio, video or 3D files) more reusable: they can be displayed on another site alongside the original metadata and information provided by the source institution. If an institution updates a manifest - perhaps adding information from updated cataloguing or crowdsourcing - any sites that display that image automatically get the updated metadata.
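As a concrete illustration, this is roughly what a site does when it links to a manifest: read the JSON and walk from sequences to canvases to images. The manifest below is a made-up fragment in IIIF Presentation API 2.x shape, not a real British Library record:

```python
import json

# Made-up fragment in IIIF Presentation API 2.x shape (illustrative only)
MANIFEST = json.loads("""{
  "@type": "sc:Manifest",
  "label": "A volume of playbills",
  "metadata": [{"label": "Date", "value": "1820"}],
  "sequences": [{
    "canvases": [{
      "label": "f. 1r",
      "images": [{"resource":
        {"@id": "https://example.org/iiif/image1/full/full/0/default.jpg"}}]
    }]
  }]
}""")

def image_urls(manifest):
    """Yield (canvas label, image URL) pairs from a v2 manifest."""
    for seq in manifest.get("sequences", []):
        for canvas in seq.get("canvases", []):
            for img in canvas.get("images", []):
                yield canvas.get("label"), img["resource"]["@id"]

print(list(image_urls(MANIFEST)))
```

Note that Presentation API 3.0 restructures these elements (canvases sit in `items`, images in annotation pages), so real-world code typically checks the manifest's `@context` version first.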

Playbill showing the title after other large text

We've posted before about how we used IIIF manifests as the basis for our In the Spotlight crowdsourced tasks on LibCrowds.com. Playbills are great candidates for crowdsourcing because they are hard to transcribe automatically, and the layout and information present vary a lot. Using IIIF meant that we could access images of playbills directly from the British Library's servers without needing server space and extra processing to make local copies. You didn't need technical knowledge to copy a manifest address and add a new volume of playbills to In the Spotlight. This worked well for a couple of years, but over time we found it difficult to maintain bespoke software for LibCrowds.

When we started looking for alternatives, the Zooniverse platform was an obvious option. Zooniverse hosts dozens of historical or cultural heritage projects, and hundreds of citizen science projects. It has millions of volunteers, and a 'project builder' that means anyone can create a crowdsourcing project - for free! We'd already started using Zooniverse for other Library crowdsourcing projects such as Living with Machines, which showed us how powerful the platform can be for reaching potential volunteers. 

But that experience also showed us how complicated the process of getting images and metadata onto Zooniverse could be. Using Zooniverse for volumes of playbills for In the Spotlight would require some specialist knowledge. We'd need to download images from our servers, resize them, generate a 'manifest' list of images and metadata, then upload it all to Zooniverse; and repeat that for each of the dozens of volumes of digitised playbills.
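That manual route can be sketched roughly as follows - the filenames and metadata columns are invented for illustration, and the download, resize and upload steps are omitted - to show the kind of 'manifest' CSV a bulk Zooniverse subject upload expects:

```python
import csv
import io

# Sketch of the manual workflow described above: list local image
# files plus their metadata, then write the CSV 'manifest' used for
# a bulk subject upload. Filenames and columns are illustrative only.
panels = [
    {"filename": "vol001_001.jpg", "shelfmark": "Example 123", "page": "1"},
    {"filename": "vol001_002.jpg", "shelfmark": "Example 123", "page": "2"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["filename", "shelfmark", "page"])
writer.writeheader()
writer.writerows(panels)

manifest_csv = buffer.getvalue()
print(manifest_csv)
```

Repeating this by hand for dozens of volumes is exactly the overhead the IIIF import removes.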

Fast forward to summer 2021, when we had the opportunity to put a small amount of funding into some development work by Zooniverse. I'd already collaborated with Sam Blickhan at Zooniverse on the Collective Wisdom project, so it was easy to drop her a line and ask if they had any plans or interest in supporting IIIF. It turned out they had, but hadn't previously had the resources or an interested organisation needed to take it forward.

We came up with a brief outline of what the work needed to do, with the goal of recreating some of the functionality of In the Spotlight on Zooniverse. Key to this was 'the ability to add subject sets via IIIF manifest links'. ('Subject set' is Zooniverse-speak for the 'set of images or other media' that forms the basis of a crowdsourcing task.) And of course we wanted the ability to set up some crowdsourcing tasks with those items… The Zooniverse developer, Jim O'Donnell, shared his work in progress on GitHub, and I was easily able to set up a test project and ask people to help create sample data for further testing.

If you have a Zooniverse project and a IIIF address to hand, you can try out the import for yourself: add 'subject-sets/iiif?env=production' to your project builder URL. e.g. if your project is number #xxx then the URL to access the IIIF manifest import would be https://www.zooniverse.org/lab/xxx/subject-sets/iiif?env=production
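For example, constructing that address in code (with a made-up project number standing in for your own) looks like:

```python
# Build the IIIF import URL for a given Zooniverse project, following
# the pattern described above. The project ID here is hypothetical -
# substitute the number from your own project builder URL.
project_id = 12345

import_url = (
    f"https://www.zooniverse.org/lab/{project_id}"
    "/subject-sets/iiif?env=production"
)
print(import_url)
```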

Paste a manifest URL into the box. The platform parses the file to present a list of metadata fields, which you can flag as hidden or visible in the subject viewer (the public task interface). When you're happy, you can click a button to upload the manifest as a new subject set (like a folder of items), and your images are imported. (Don't worry if it initially says '0 subjects'.)

 

Screenshot of manifest import screen

You can try out our live task and help create real data for testing ingest processes at ​​https://frontend.preview.zooniverse.org/projects/bldigital/in-the-spotlight/classify

This is a very brief introduction, with more to come on managing data exports and IIIF annotations once you've set up, tested and launched a crowdsourced workflow (task). We'd love to hear from you - how might this be useful? What issues do you foresee? How might you want to expand or build on this functionality? Email [email protected] or tweet @mia_out @LibCrowds. You can also comment on GitHub https://github.com/zooniverse/Panoptes-Front-End/pull/6095 or https://github.com/zooniverse/iiif-annotations

Digital work in libraries is always collaborative, so I'd like to thank British Library colleagues in Finance, Procurement, Technology, Collection Metadata Services and various Collections departments; the Zooniverse volunteers who helped test our first task and of course the Zooniverse team, especially Sam, Jim and Chris for their work on this.

 

14 March 2022

The Lotus Sutra Manuscripts Digitisation Project: the collaborative work between the Heritage Made Digital team and the International Dunhuang Project team

Digitisation has become one of the key tasks for curatorial roles within the British Library. It rests on two main pillars: making collection items accessible to everybody around the world, and preserving unique and sometimes very fragile items. Digitisation involves many different teams and workflow stages, including retrieval, conservation, curatorial management, copyright assessment, imaging, workflow management, quality control, and final publication to online platforms.

The Heritage Made Digital (HMD) team works across the Library to assist with digitisation projects. An excellent example of the collaborative nature of the relationship between the HMD and International Dunhuang Project (IDP) teams is the quality control (QC) of the Lotus Sutra Project’s digital files. It is crucial that images meet the quality standards of the digital process. As a Digitisation Officer in HMD, I am in charge of QC for the Lotus Sutra Manuscripts Digitisation Project, which is currently conserving and digitising nearly 800 Chinese Lotus Sutra manuscripts to make them freely available on the IDP website. The manuscripts were acquired by Sir Aurel Stein after they were discovered in a hidden cave in Dunhuang, China in 1900. They are thought to have been sealed there at the beginning of the 11th century. They are now part of the Stein Collection at the British Library and, together with the international partners of the IDP, we are working to make them available digitally.

The majority of the Lotus Sutra manuscripts are scrolls and, after they have been treated by our dedicated Digitisation Conservators, our expert Senior Imaging Technician Isabelle does an outstanding job of imaging the fragile manuscripts. My job is then to prepare the images for publication online. This includes checking that they have the correct technical metadata such as image resolution and colour profile, are an accurate visual representation of the physical object and that the text can be clearly read and interpreted by researchers. After nearly 1000 years in a cave, it would be a shame to make the manuscripts accessible to the public for the first time only to be obscured by a blurry image or a wayward piece of fluff!
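As an illustration of the technical-metadata side of QC - the target values below are hypothetical, not the Library's actual standards - a first-pass check might look like:

```python
# Illustrative QC check along the lines described: verify that an
# image's technical metadata meets (hypothetical) target values
# before it is signed off for publication.
TARGETS = {"resolution_ppi": 300, "colour_profile": "Adobe RGB (1998)"}

def qc_issues(image_metadata):
    """Return a list of QC problems; an empty list means the image passes."""
    issues = []
    if image_metadata.get("resolution_ppi", 0) < TARGETS["resolution_ppi"]:
        issues.append("resolution below target")
    if image_metadata.get("colour_profile") != TARGETS["colour_profile"]:
        issues.append("unexpected colour profile")
    return issues

print(qc_issues({"resolution_ppi": 300, "colour_profile": "Adobe RGB (1998)"}))  # passes
print(qc_issues({"resolution_ppi": 150, "colour_profile": "sRGB"}))  # two issues
```

The visual checks - legibility, stray fluff, faithful representation of the object - still need a human eye, which is the point of the examples that follow.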

With the scrolls measuring up to 13 metres long, most are too long to be imaged in one go. They are instead shot in individual panels, which our Senior Imaging Technicians digitally “stitch” together to form one big image. This gives online viewers a sense of the physical scroll as a whole, in a way that would not be possible in real life for those scrolls that are more than two panels in length unless you have a really big table and a lot of specially trained people to help you roll it out. 
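As a toy illustration of the idea (not the imaging team's actual software): if each panel is treated as a grid of pixel rows, stitching without any overlap alignment is just row-by-row concatenation of panels shot left to right:

```python
# Toy illustration of 'stitching': each panel is a grid of pixel rows,
# and panels are joined row by row into one wide image. Real stitching
# software also aligns overlapping regions; this just concatenates.
def stitch(panels):
    """Horizontally concatenate panels of equal height."""
    height = len(panels[0])
    assert all(len(p) == height for p in panels), "panels must match in height"
    return [sum((p[row] for p in panels), []) for row in range(height)]

panel_a = [[1, 1], [1, 1]]          # a 2-row, 2-pixel-wide panel
panel_b = [[2, 2, 2], [2, 2, 2]]    # a 2-row, 3-pixel-wide panel
scroll = stitch([panel_a, panel_b])
print(scroll)  # each row is now 5 pixels wide
```

Any misalignment at the joins is what produces the 'wonky lines' discussed below.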

Photo showing the three individual panels of Or.8210/S.1530 with breaks in between
Or.8210/S.1530: individual panels
Photo showing the three panels of Or.8210/S.1530 as one continuous image
Or.8210/S.1530: stitched image

 

This post-processing can create issues, however. Sometimes an error in the stitching process can cause a scroll to appear warped or wonky. In the stitched image for Or.8210/S.6711, the ruled lines across the top of the scroll appeared wavy and misaligned. But when I compared this with the images of the individual panels, I could see that the lines on the scroll itself were straight and unbroken. It is important that the digital images faithfully represent the physical object as far as possible; we don’t want anyone thinking these flaws are in the physical item and writing a research paper about ‘Wonky lines on Buddhist Lotus Sutra scrolls in the British Library’. Therefore, I asked the Senior Imaging Technician to restitch the images together: no more wonky lines. However, we accept that the stitched images cannot be completely accurate digital surrogates, as they are created by the Imaging Technician to represent the item as it would be seen if it were to be unrolled fully.

 

Or.8210/S.6711: distortion from stitching. The ruled line across the top of the scroll is bowed and misaligned

 

Similarly, our Senior Imaging Technician applies ‘digital black’ to make the image background a uniform colour. This is to hide any dust or uneven background and ensure the object is clear. If this is accidentally overused, it can make it appear that a chunk has been cut out of the scroll. Luckily this is easy to spot and correct, since we retain the unedited TIFFs and RAW files to work from.

 

Or.8210/S.3661, panel 8: overuse of digital black when filling in tear in scroll. It appears to have a large black line down the centre of the image.

 

Sometimes the scrolls are wonky, or dirty or incomplete. They are hundreds of years old, and this is where it can become tricky to work out whether there is an issue with the images or the scroll itself. The stains, tears and dirt shown in the images below are part of the scrolls and their material history. They give clues to how the manuscripts were made, stored, and used. This is all of interest to researchers and we want to make sure to preserve and display these features in the digital versions. The best part of my job is finding interesting things like this. The fourth image below shows a fossilised insect covering the text of the scroll!

 

Black stains: Or.8210/S.2814, panel 9
Torn and fragmentary panel: Or.8210/S.1669, panel 1
Insect droppings obscuring the text: Or.8210/S.2043, panel 1
Fossilised insect covering text: Or.8210/S.6457, panel 5

 

We want to minimise the handling of the scrolls as much as possible, so we will only reshoot an image if it is absolutely necessary. For example, I would ask a Senior Imaging Technician to reshoot an image if debris is covering the text and makes it unreadable - but only after inspecting the scroll to ensure it can be safely removed and is not stuck to the surface. However, if some debris such as a small piece of fluff, paper or hair, appears on the scroll’s surface but is not obscuring any text, then I would not ask for a reshoot. If it does not affect the readability of the text, or any potential future OCR (Optical Character Recognition) or handwriting analysis, it is not worth the risk of damage that could be caused by extra handling. 
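The reshoot rule described above can be summed up as a tiny, illustrative predicate (not an actual workflow tool): reshoot only when debris obscures the text and conservators confirm it can be safely removed.

```python
# Illustrative encoding of the reshoot decision described above.
def needs_reshoot(debris_over_text, safely_removable):
    """Reshoot only if debris obscures text AND can be safely removed."""
    return debris_over_text and safely_removable

print(needs_reshoot(True, True))   # debris over text, removable: reshoot
print(needs_reshoot(False, True))  # debris clear of text: no reshoot
```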

Reshoot: Or.8210/S.6501: debris over text  /  No reshoot: Or.8210/S.4599: debris not covering text.

 

These are a few examples of the things to which the HMD Digitisation Officers pay close attention during QC. Only through this careful process can we ensure that the digital images accurately reflect the physicality of the scrolls and represent their original features. By developing a QC process that applies the best techniques and procedures, working to defined standards and guidelines, we succeed in making these incredible items accessible to the world.

Read more about the Lotus Sutra Project on the IDP Blog

IDP website: IDP.BL.UK

And IDP twitter: @IDP_UK

Dr Francisco Perez-Garcia

Digitisation Officer, Heritage Made Digital: Asian and African Collections

Follow us @BL_MadeDigital

29 October 2021

Thought Bubble 2021 Wikithon Preparation

Comics fans, are you getting geared up for Thought Bubble? If you enjoy editing Wikipedia and Wikidata about comics, or want to learn how, please do join us and our collaborators at Leeds Libraries for our first in-person Wikithon since this residency started, on Thursday 11th November, from 1.30pm to 4.30pm, in the Sanderson Room of Leeds Central Library.

Drawing of a person reading a comic and drinking a mug of tea

Joining us in person?

Remember the first step is to book your place here, via Eventbrite

If you’d like to get a head start, you can download and read our handy guide to setting up your Wikipedia account. There is advice on creating your account, Wikipedia's username policy and how to create your user page.

Once you have done that, or if you already have a Wikipedia account, please join our Thought Bubble Wikithon dashboard (the enrollment passcode is ltspmyfa) and go through the introductory exercises, which cover:

  • Wikipedia Essentials
  • Editing Basics
  • Evaluating Articles and Sources
  • Contributing Images and Media Files
  • Sandboxes and Mainspace
  • Sources and Citations
  • Plagiarism
  • Introduction to Wikidata (for those interested in this)

These are all short exercises that will help familiarise you with Wikipedia and its processes. Don’t have time to do them? We get it, and that’s totally fine - we’ll cover the basics on the day too!

You may want to verify your Wikipedia account - this function exists to make sure that people are contributing responsibly to Wikipedia. The easiest and swiftest way to verify your account is to make 10 small edits. You could do this by correcting typos or adding in missing dates. Another way is to find articles where citations are needed and add them via Citation Hunt. For further information on adding citations, you may find this video useful.

When it comes to Wikidata, we are very inspired by the excellent work of the Graphic Possibilities project at the Michigan State University Department of English, and we have been learning from them. For those interested in editing Wikidata, we will be on hand to support this during our Thought Bubble Wikithon event.

Happier with a hybrid approach?

If you cannot join the physical event in person, but would like to contribute, please do check out and sign up to our dashboard. Although we cannot run the training as a hybrid presentation on this occasion, the online dashboard training exercises will be an excellent starting point. From there, all of your edits and contributions will be registered, and you can pat yourself firmly on the back for making the world of comics a better place from a distance.

However, if you can attend in person, please register for the Wikithon at Leeds Central Library here and check out the Thought Bubble festival programme here. Hope to see you there!

This post is by Wikimedian in Residence Lucy Hinnie (@BL_Wikimedian) and Digital Curator Stella Wisdom (@miss_wisdom).

Digital scholarship blog recent posts

Archives

Tags

Other British Library blogs