A Decade of Reinventing Transcription on the Zooniverse Platform

Over the past two decades, crowdsourcing has emerged as a viable and effective tool for creating transcriptions from digitized images of physical source material such as handwritten manuscripts. Many early crowdsourced text transcription efforts were one-off projects, but swelling demand led to the development of several online platforms, including Zooniverse, the largest virtual platform for crowdsourced research. Since 2010, more than 75 transcription projects have been built on different iterations of the Zooniverse codebase and platform, testing different approaches and tools for transcription. Many Zooniverse platform users have told their own stories through peer-reviewed publications and gray literature (Brohan, 2019; Brusuelas, 2019; Deines et al., 2018), but the genesis and iterative development of the platform itself has yet to be described. We will address this lacuna by describing three common challenges that informed the theories and practices underpinning the development of transcription tools on Zooniverse. The paper serves as a reflection on responsible resource management and collaboration in DH.

Crowdsourcing is by nature deeply collaborative, and requires significant design and planning in order to be done ethically: by not wasting volunteers’ time, being transparent about methods, producing open-access results, and supporting diverse teams and volunteers in using the platform. We aim to provide an example for anyone using, creating, or maintaining technical products used for data production in DH, by demonstrating how the history of technical infrastructure and DH resources provides critical context for interpreting project results.

In this short paper, we will present ongoing research into the history of Zooniverse text transcription projects from 2010 to the present, including the methodological shift away from bespoke development after the 2015 launch of the free-to-use Project Builder (PB), which allows teams to create their own projects and connect with Zooniverse volunteers. We will reveal how generations of project teams, composed of Zooniverse researchers and diverse academic and cultural heritage partners, tested different approaches to text transcription, and adapted, reused, and refined methods to amplify successes and mitigate the challenges or shortcomings of previous approaches.

This discussion of Zooniverse transcription tools highlights the innate challenges of designing for distributed transcription by multiple volunteers on a platform that was constantly adapting to meet increasing demand for crowdsourcing tools from many disciplines and practitioners (peaking in 2020, when the Covid-19 pandemic closed physical institutions), a growing volunteer base, and constant advances in web development (Samuel, 2021). Through a literature review, content analysis, and our work with teams who have used the platform since 2010, we identify three key challenges across Zooniverse transcription projects:

1. Variety:

Text data is complex no matter how it is transcribed or tagged. Original documents can be written in diverse languages and scripts, and their layout can be structured (e.g. forms or tables), semi-structured, or unstructured. Decisions about whether transcribers should produce diplomatic, semi-diplomatic, or regularized transcriptions (e.g. preserving original spelling and abbreviations exactly as written, versus expanding and modernizing them), and how to communicate such conventions succinctly online, can be even more complex when working with distributed volunteers than with scholarly editing teams. These factors make it extremely difficult to design a single transcription approach that can be used for many types of text.

2. Units of classification and completeness metrics:

Zooniverse transcriptions can be broken down at the level of a page, paragraph, sentence, line, or character, but until 2018 the unit of classification for the purposes of aggregation was almost always the page/image (Blickhan et al., 2019). Different projects have designed the unit of transcription radically differently for various reasons: to increase user confidence, to lower barriers to participation, and in the hope of easing automated aggregation, but these decisions sometimes compounded the complexity of the resulting data. We will discuss examples of different approaches, their affordances and often unanticipated challenges, as well as successive efforts to tweak the tools and platform to enhance data quality and improve the user experience for volunteers and project teams.
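
To illustrate how the choice of unit shapes the resulting data, the following sketch contrasts a hypothetical page-level record with a hypothetical line-level one. The field names are invented for illustration and do not reproduce the actual Zooniverse classification export schema.

```python
# Hypothetical record shapes, for illustration only; these are not the
# real Zooniverse classification exports.

# Page-level unit (typical before 2018): each volunteer's classification
# arrives as a single free-text blob covering the whole image, so there is
# no positional data to tell the back-end how transcriptions line up.
page_level = {
    "subject_id": 123,
    "annotations": [
        {"task": "T0", "value": "full transcription of the page ..."},
    ],
}

# Line-level unit: the classification still covers one image, but every
# annotation carries the coordinates of an individual line, which is what
# makes the clustering and per-line aggregation discussed below possible.
line_level = {
    "subject_id": 123,
    "annotations": [
        {"task": "T0", "x": 102, "y": 210, "text": "first line of text"},
        {"task": "T0", "x": 105, "y": 480, "text": "second line of text"},
    ],
}
```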

3. Aggregation:

Data aggregation is the foremost challenge for text transcription projects on the Zooniverse platform, which gathers multiple “classifications” or assessments per page or image, compares them, and seeks a majority assessment. For the first Zooniverse project, Galaxy Zoo, volunteers were asked simple multiple-choice questions, and aggregation was fairly straightforward (Lintott et al., 2008). For text, however, the difficulty again comes down to the issue of units. If a page of text is broken down into distinct sections or units, a major challenge is grouping, or clustering, the positional data that tells the platform back-end which transcriptions refer to the same unit, so that the appropriate units of text can be aggregated together. Even if a page is broken down into physical lines of text, each line will contain multiple words, and slight differences in volunteers’ transcriptions, e.g. in spelling and punctuation, can affect the quality of the result. The challenges of aggregating highly variable textual data amplify typical skills gaps between academic disciplines, and between those who code in their line of work and those who don’t, but the challenges of aggregating text cannot be solely attributed to disciplinary differences (Van Hyning, 2019).
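
As a concrete illustration of the clustering problem, the following minimal Python sketch groups volunteers’ line transcriptions by their start coordinates and then takes a simple majority vote per cluster. The sample data, pixel tolerance, and voting rule are all invented for illustration; this is not the Zooniverse aggregation code.

```python
from collections import Counter

# Hypothetical input: one entry per volunteer transcription of a single
# line, with the approximate start coordinates of that line on the page.
transcriptions = [
    {"x": 102, "y": 210, "text": "To be, or not to be"},
    {"x": 98,  "y": 214, "text": "To be or not to be"},
    {"x": 100, "y": 212, "text": "To be, or not to be"},
    {"x": 105, "y": 480, "text": "that is the question"},
    {"x": 99,  "y": 483, "text": "that is the question"},
]

def cluster_lines(items, tolerance=25):
    """Greedily group transcriptions whose start points fall within
    `tolerance` pixels of an existing cluster's centroid."""
    clusters = []
    for item in sorted(items, key=lambda t: t["y"]):
        for cluster in clusters:
            cx = sum(t["x"] for t in cluster) / len(cluster)
            cy = sum(t["y"] for t in cluster) / len(cluster)
            if abs(item["x"] - cx) <= tolerance and abs(item["y"] - cy) <= tolerance:
                cluster.append(item)
                break
        else:  # no existing cluster matched, so start a new line
            clusters.append([item])
    return clusters

def consensus(cluster):
    """Pick the most common transcription string for one clustered line."""
    counts = Counter(t["text"] for t in cluster)
    text, votes = counts.most_common(1)[0]
    return text, votes / len(cluster)

for cluster in cluster_lines(transcriptions):
    text, agreement = consensus(cluster)
    print(f"{agreement:.0%} agreement: {text}")
```

Even this toy example exposes the fragility described above: “To be, or not to be” and “To be or not to be” count as entirely different strings, so whole-string voting discards near-agreement that approaches such as character-level alignment could preserve.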

This short paper will provide brief examples of the above challenge categories, and illustrate lessons learned through each developmental stage. We hope that feedback from this session will help to guide our future research efforts by suggesting how we might keep refining the sample datasets and documentation designed to guide Zooniverse transcription project creators through all stages of project design and of working with their resulting data.

This historical overview of text transcription on the Zooniverse platform will matter for current and future project creators using the PB, and for anyone working with transcription data produced on Zooniverse in either bespoke or PB instances. This paper could help teams trying to describe the theoretical and methodological underpinnings of the platform they used to gather their data, and provide crucial context for those trying to reuse the data for new purposes. We believe our findings are also applicable to DH practice more broadly: we argue that collaboratively adapting and sustaining existing technologies, and refining our documentation and understanding of the platforms we create and the data they produce, is ultimately more impactful than continuously prioritizing new builds and tools.

Appendix A

Bibliography
  1. Blickhan, S.; C. Krawczyk; D. Hanson; A. Boyer; A. Simenstad; and V. Van Hyning, 2019. “Individual vs. Collaborative Methods of Crowdsourced Transcription,” Journal of Data Mining and Digital Humanities 2019, Special Issue on Collecting, Preserving, and Disseminating Endangered Cultural Heritage for New Understandings through Multilingual Approaches. https://doi.org/10.46298/jdmdh.5759
  2. Brohan, P., 2019. Transcription_methods_review. https://github.com/philip-brohan/transcription_methods_review
  3. Brusuelas, J., 2019. “Ancient Lives: A Final Report.” Ancient Lives (blog), November 5, 2019. https://ancientlives.blog/2019/11/05/ancient-lives-a-final-report/
  4. Deines, N., et al., 2018. “Six Lessons from Our First Crowdsourcing Project in the Digital Humanities.” The Getty (blog), February 17, 2018. https://www.getty.edu/news/six-lessons-learned-from-our-first-crowdsourcing-project-in-the-digital-humanities/
  5. Lintott, C.; K. Schawinski; A. Slosar; K. Land; S. Bamford; D. Thomas; M. Raddick; R. Nichol; A. Szalay; D. Andreescu; P. Murray; J. Vandenberg, 2008. “Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey,” Monthly Notices of the Royal Astronomical Society, 389:3, pp. 1179-1189. https://doi.org/10.48550/arXiv.0804.4483
  6. Samuel, S., 2021. “Citizen science is booming during the pandemic.” Vox, April 18, 2021. https://www.vox.com/future-perfect/22177247/citizen-science-amateur-backyard-birding-astronomy-covid-pandemic
  7. Van Hyning, V., 2019. “Harnessing crowdsourcing for scholarly and GLAM purposes.” Literature Compass, 16:3-4. https://doi.org/10.1111/lic3.12507
Victoria Van Hyning (vvh@umd.edu), University of Maryland, iSchool, United States of America and Samantha Blickhan (samantha@zooniverse.org), Zooniverse, Adler Planetarium