This blog post is by Dr Adi Keinan-Schoonbaert, Digital Curator for Asian and African Collections, British Library. She's on Mastodon as @[email protected].
Last April I was part of a British Library delegation to China – a rewarding and fulfilling experience. The trip aimed to refresh collaborations and partnerships with the National Library of China and the Dunhuang Academy, and to explore new connections and strengthen existing ones with many other institutions and individuals. I will explore this trip through a digital scholarship lens, but you can read all about the trip and its wider aims and accomplishments in a post on the IDP blog by International Dunhuang Programme Project Manager, Anastasia Pineschi.
The Mogao Caves in Dunhuang
My primary objective was to attend and present at the IDP conference (19-20 April 2024), co-organised by the British Library and the Dunhuang Academy and timed to coincide with IDP’s 30th anniversary and the launch of a fresh new IDP website. Sharing our work and learning from others during this conference and the IDP workshop that took place the following day was one of my objectives. But I was also looking to reconnect with peers and get to know new colleagues working in the fields of DH and the intersection of AI, cultural heritage and historical digital collections; to explore opportunities for collaboration in the field of OCR/HTR (Optical Character Recognition, Handwritten Text Recognition); and to gather ideas for DH opportunities for IDP.
British Library and Dunhuang Academy colleagues in front of Mogao Cave 96 (Nine Story Temple)
Colleagues from the Dunhuang Academy showed us such outstanding hospitality, with our Dunhuang trip including many behind-the-scenes visits and unique experiences. These included, naturally, the extraordinary Mogao Grottoes, but also another cave site called the Western Thousand Buddha Caves, and stunning natural spots such as the Singing Sand Dune (Mingsha Mountain) and the Crescent Moon Spring. We also visited places such as the Digital Exhibition and Visitor Center, the Multi-field lab at the Dunhuang Studies Information Center, the Grottoes Monitoring Center and Conservation Lab, and the Dunhuang City Museum. All left a lasting impression.
One of the dashboards managing the Mogao Grottoes at the Grottoes Monitoring Center
But let’s get back to the main purpose of this post, which is to report on some of the outstanding work happening out there at the intersection of Chinese historical collections and DH.
Conference (DH) Highlights
I’ll start with one of the earliest platforms to enable and encourage DH research on Chinese works, the Chinese Text Project. Dr Donald Sturgeon (Durham University) presented this well-known digital library of pre-20th century Chinese texts, which started in 2005 and is still impressively active today, being one of the largest and most widely used digital libraries of premodern Chinese texts. Crowdsourcing and AI are now used to enhance the texts available via the platform: Machine Learning OCR automates transcription, punctuation is added through deep learning, and OCR corrections are made via a crowdsourcing interface, which sees quite a high volume of engagement – typically ca. 1,000 edits per day! Sturgeon also talked about the automated annotation of named historical entities in transcribed texts, and about using deep learning to infer periods and dates, making it possible to convert between Chinese and Western calendars. These annotations can then be turned into structured data – enabling linking up to other data.
Dr Donald Sturgeon presents about extracting structured data from annotations
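To make the idea of turning annotations into linkable structured data concrete, here is a minimal sketch of how annotated entities in a transcribed text might be expressed as subject-predicate-object triples. The schema, identifiers and field names are illustrative assumptions, not the Chinese Text Project's actual data model.

```python
# Minimal sketch: converting entity annotations on a transcribed text into
# structured, linkable triples. All names and identifiers are illustrative,
# not the Chinese Text Project's actual schema.

def annotations_to_triples(text_id, annotations):
    """Turn a list of entity annotations into (subject, predicate, object) triples."""
    triples = []
    for ann in annotations:
        entity_uri = f"entity:{ann['id']}"
        # Link the text to the entity it mentions, then describe the entity.
        triples.append((f"text:{text_id}", "mentions", entity_uri))
        triples.append((entity_uri, "type", ann["type"]))
        triples.append((entity_uri, "label", ann["label"]))
    return triples

annotations = [
    {"id": "Q001", "type": "person", "label": "Sima Qian"},
    {"id": "Q002", "type": "dynasty", "label": "Han"},
]
triples = annotations_to_triples("shiji", annotations)
for t in triples:
    print(t)
```

Once annotations are in this shape, the same entity identifier can recur across texts, which is what makes linking out to other datasets possible.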
While on the topic of state-of-the-art platforms, Prof Kiyonori Nagasaki (International Institute for Digital Humanities, Tokyo) talked about the SAT Daizokyo Text Database and a digital editing system for Buddhist canons and manuscripts using the AI-OCR recently developed and released by the National Diet Library of Japan. The IIIF-compliant database of Buddhist icons has over 20,000 annotated items, enabling search by various attributes. Nagasaki gave us a website demo, displaying an illustration with 400 annotations: one can search the annotated parts of the image and compare images in the search results. Like the Chinese Text Project, the SAT platform also combines crowdsourced ‘editing’ with clever Machine Learning techniques. It was good to hear that there is an intention for SAT to gradually include Dunhuang manuscripts in the future.
Prof Kiyonori Nagasaki demonstrated how the interface interaction is facilitated by IIIF: clicking on the text brings up the corresponding area of the IIIF image
Another well-established, IIIF-based system, presented by Dr Hongxing Zhang (V&A Museum), is the Chinese Iconography Thesaurus (CIT). CIT has been an ongoing project since 2016, developed at the V&A with the aim of working towards a subject indexing standard for Chinese art. A controlled vocabulary is crucial for improving access to collections and for linking up multiple collections. CIT focuses on Chinese iconography – the motifs, themes, and subject matter of cultural objects – with almost 15,000 concepts and entities. And it is IIIF-supported: images and annotations can be viewed in the IIIF Mirador viewer.
Not just Chinese
While much of the work around Dunhuang and Silk Road manuscripts concerns the Chinese language, several scholars emphasised the importance of addressing other languages as well. Dunhuang manuscripts were written in languages such as Sogdian, Middle Persian, Parthian, Bactrian, Tocharian, Khotanese, Sanskrit, Tibetan, Old Uighur, and Tangut. Prof Xinjiang Rong (Peking University) emphasised the importance of providing transcriptions, transliterations and translations alongside digitised images. These languages require specialist expertise; therefore, cooperation between institutions and scholars is crucial. Prof Tieshan Zhang (Minzu University of China) also urged researchers to address and publish non-Chinese Dunhuang manuscripts, especially highlighting the importance of making better use of text recognition technologies for languages other than Chinese. Last year, the Computer Science department of Minzu University of China applied for a research project to do just that: they started with non-Chinese languages and aim to increase recognition accuracy to over 90%.
The talk by Prof Hannes Fellner (University of Vienna) was a perfect example of how one could address the study of material in other languages using computational methods. He introduced a project aiming to trace the development of Tarim Brahmi – one of the major writing systems of the Eastern Silk Road during the 1st millennium CE, used for Khotanese, Sanskrit, Tocharian, and Saka. The project compiles a database of characters in Tarim Brahmi languages (currently primarily Tocharian), with palaeographic and linguistic annotations, presented as a web application. Conceived as a research tool for texts in this writing system, such a platform could facilitate the study of palaeographic variation, which in turn could help explore scribal identification, stages of language development, and correlations between palaeographic and linguistic variation. Fellner works with Transkribus and IIIF to retrieve the coordinates of characters and words, returning the relevant ‘cut-outs’ of the photos to the web application. These can then be visualised, displaying character or word variations alongside their transliteration.
Prof Hannes Fellner shows how working with Transkribus and IIIF makes it possible to retrieve ‘cut-outs’ from photographs corresponding to the query string
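The 'cut-out' mechanism rests on a neat property of the IIIF Image API: a region of an image can be requested directly in the URL, using the pattern {base}/{region}/{size}/{rotation}/{quality}.{format}. Given a character's bounding box (for instance from Transkribus coordinates), a sketch like the following builds such a URL. The base URL is a made-up placeholder, and this is my illustration of the general technique, not the project's actual code.

```python
# Sketch: retrieving a character 'cut-out' via the IIIF Image API.
# A IIIF Image API URL has the form
#   {base}/{region}/{size}/{rotation}/{quality}.{format}
# where region "x,y,w,h" selects a rectangle in pixel coordinates.
# The base URL below is a hypothetical placeholder.

def iiif_cutout_url(base, x, y, w, h, size="max"):
    """Build a IIIF Image API URL returning only the region x,y,w,h."""
    return f"{base}/{x},{y},{w},{h}/{size}/0/default.jpg"

# e.g. a 64x64-pixel character box located at (120, 340) on the page image
url = iiif_cutout_url("https://iiif.example.org/manuscript-42", 120, 340, 64, 64)
print(url)
# -> https://iiif.example.org/manuscript-42/120,340,64,64/max/0/default.jpg
```

Because the crop happens server-side, the web application only ever transfers the small character images it needs, rather than whole manuscript photographs.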
Coming back to Chinese OCR/HTR, there’s quite a lot of activity in this area. I presented on work at the British Library aiming to advance Chinese HTR methods, set in the wider context of the Library’s OCR/HTR work. We’ve focused on the eScriptorium platform, collaborating with Colin Brisson (École Pratique des Hautes Études) and the French consortium Numerica Sinologica (now working on the READ_Chinese project). I talked about the work of our PhD placement student, Peter Smith (University of Oxford), who contributed to processes such as binarisation, segmentation and text recognition. I recently presented this work at Ryukoku University in Kyoto, and you can read more about it in Peter’s excellent blog post.
Dr Adi Keinan-Schoonbaert talking about OCR/HTR activities at the British Library
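For readers unfamiliar with the pipeline stages mentioned above, here is a toy illustration of the first of them, binarisation: deciding which greyscale pixels are ink and which are background. This simple global threshold is purely didactic – production systems such as those behind eScriptorium use far more robust, learned methods.

```python
# Toy illustration of the binarisation step in an OCR/HTR pipeline:
# mapping greyscale pixel values (0 = black .. 255 = white) to pure
# black/white. A single global threshold is the simplest possible approach;
# real systems use much more sophisticated, often learned, binarisation.

def binarise(pixels, threshold=128):
    """Map each greyscale value to 0 (ink) or 255 (background)."""
    return [[0 if p < threshold else 255 for p in row] for row in pixels]

page = [
    [250, 240, 30, 245],  # a dark stroke amid light background
    [248, 25, 20, 250],
]
print(binarise(page))
# -> [[255, 255, 0, 255], [255, 0, 0, 255]]
```

Binarised output like this then feeds the next stages: segmentation finds lines and characters in the black/white image, and recognition assigns them text.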
Dunhuang online platforms
It is crucial to embed such technologies and software in user-friendly platforms, where different functionalities serve different needs and audiences. Dr Peter Zhou (University of California, Berkeley) talked about the importance of building a sustainable platform that can support the complete digital lifecycle, including data curation and management, long-term preservation, and dissemination. Zhou’s objective for the Digital Dunhuang platform is to connect resources that are otherwise isolated, with uniform standards for data exchange. Such a platform must accommodate many kinds of data: raw images, historical photos, videos, cave QTVRs, digitised texts and artifacts, reproductions, microfilm, interactive visuals, conservation data, spatial information, 3D modelling data, and immersive media. It should also be flexible, able to scale up and handle mass content in different formats, have Machine Learning capabilities, and aggregate knowledge content through linking.
We can see many of these elements in a platform developed by the Dunhuang Academy. Xiaogang Zhang and Tianxiu Yu of the Dunhuang Academy introduced the Digital Library Cave platform (Digital Dunhuang), built in collaboration with Tencent, along with plans for its future. The platform presents both a database of Dunhuang materials and murals, and a playable game centred on the narrative of the Library Cave. It displays an engaging, immersive mixture of 3D environments and artifacts, in addition to 2D items. The aim for the Digital Dunhuang platform is to present digital resources relating to the Mogao Grottoes in one integrated and comprehensive resource for Dunhuang studies. (Side note: access to the database requires a login and input of personal data.)
Tianxiu Yu showing a Knowledge Graph connecting different types of data resources
The richness and variety of data available now, and in future, on this platform is remarkable. The entire cliff of the Mogao Grottoes and some of the large-scale cultural relics are available in 3D, complemented by other data used in conservation and research. And there’s an impressive array of AI technologies applied to both images and texts. For images, annotated mural datasets and automatic object detection allow for search and retrieval; AI is used to enhance old photographs; line drawings are extracted from art scenes; and image stitching is automated. For texts, functionalities will include, at a later stage, character recognition providing full-text retrieval at a 90% precision rate; Traditional to Simplified Chinese conversion; automatic punctuation; entity extraction; and the creation of knowledge graphs. When completed, the platform will be open, sharing all its resources online.
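One of those text functionalities, Traditional-to-Simplified conversion, can be sketched in miniature as a character mapping. This tiny table is illustrative only: real conversion is harder, because some simplified characters stand for several distinct traditional ones, so a production system must use context rather than a plain lookup.

```python
# Toy sketch of Traditional-to-Simplified Chinese conversion via a
# character map. Real converters must handle one-to-many mappings using
# context; this tiny table is illustrative only.

T2S = {"書": "书", "龍": "龙", "學": "学", "國": "国"}

def to_simplified(text):
    """Replace each traditional character with its simplified form, if known."""
    return "".join(T2S.get(ch, ch) for ch in text)

print(to_simplified("敦煌學"))  # -> 敦煌学  ('Dunhuang studies')
```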
With a solid focus on text retrieval and analysis, Dr Xiaoxing Zhao (Dunhuang Academy) presented the Dunhuang Documents Database, collating digitised manuscripts and prints dating from the 4th to the 11th centuries discovered in the Library Cave at Mogao, Dunhuang. Providing full-text retrieval for Chinese, Tibetan, and Uighur (with plans to add Tangut), it includes keyword search and features transliteration in Traditional Chinese, which can conveniently be viewed alongside the image. It’s great to see how far AI text recognition has come!
Dr Xiaoxing Zhao demonstrating the Dunhuang Documents Database’s transliteration in Traditional Chinese, which can be viewed side by side with the image
However, technological advances are not restricted to AI and Machine Learning. Prof Simon Mahony (Emeritus Professor, UCL) gave a fascinating, image-rich talk about non-invasive and non-destructive computational imaging of ancient texts. Mahony introduced different techniques for addressing research questions arising from textual manuscripts. These methods allow, for example, reading illegible texts and revealing hidden artwork, determining the composition of pigments, or detecting characteristics of ink. One of the projects he was involved with was the Great Parchment Book project. Damaged in a fire, the book’s content had become inaccessible to researchers – but a series of steps taken to digitally straighten, flatten and stretch the pages returned it to a readable state. This and other computational methods applied to images are truly inspiring!
Prof Simon Mahony talking about how computational methods were used to enable the reading of the text in the Great Parchment Book project
Back to Beijing
Back in Beijing, we made several visits, including to the National Library of China and the Palace Museum’s Conservation Department. But I’ll focus here on two visits directly related to DH and computational methods – the first to the Chinese Academy of Sciences (CAS), and the second to the National Key Laboratory of General Artificial Intelligence, Peking University.
We were kindly hosted by Prof Cheng-Lin Liu from the State Key Laboratory of Multimodal AI Systems (MAIS), Institute of Automation, CAS, and joined by Drs Fei Yin, Heng Zhang, and Xiao-Hui Li. Prof Liu gave an excellent keynote talk at the Machine Learning workshop at the ICDAR 2023 conference, which I attended in August 2023, on “Plane Geometry, Diagram Parsing and Problem Solving” – a talk that well exemplifies MAIS’s areas of work. MAIS is a national platform specialising in document analysis, computer vision, robotics, Machine Learning, Natural Language Processing (NLP), and medical AI research – the first to start Pattern Recognition research in China, and one of its main AI research centres. We enjoyed an excellent exchange and a fruitful discussion.
MAIS and British Library colleagues at the CAS offices in the Haidian District, Beijing
From there, we travelled to Peking University for another stimulating knowledge exchange meeting with Prof Jun Wang, Director of the Research Center for Digital Humanities (PKUDH) and Vice Dean, Artificial Intelligence Institute, joined by Dr Qi Su, Dr Pengyi Zhang, Dr Hao Yang, Honglei San, Kairan Liu, and Siyu Duan. We watched videos of two Shidian platforms – open access web platforms for reading, editing and analysing ancient Chinese books, developed through a partnership between PKUDH and the Douyin Group. One platform is the Open Access Ancient Book Reading Platform, and the second is the AI-powered Ancient Book Collation Platform. The AI-empowered editing and compiling system includes an impressive array of functionalities.
Screenshot from the YouTube video, showing features of the Shidian reading platform
Our session also included presentations and discussions on topics such as AI character reconstruction, cultural heritage curation and crowdsourcing, automatic text annotation, and linked data. For example, PhD student Siyu Duan (supervised by Prof Qi Su) presented on ancient ideograph restoration, including a small experiment on Dunhuang data that suggested restorations of damaged or illegible characters. The whole session was an absolute delight!
I am so grateful for everyone’s generosity and hospitality – I have learned so much. Thank you, and until next time!
Dr Adi Keinan-Schoonbaert enjoying the dunes and the Crescent Moon Spring, Dunhuang