6th Annual Lawrence J. Schoenberg Symposium on Manuscript Studies in the Digital Age

Thinking Outside the Codex

November 21-23, 2013


Symposium Abstracts

(in program order)

 

Elaine Treharne, Stanford University

Invisible Things

This paper will investigate, through the lens of phenomenology, those elements of a manuscript book that are not visible in our text-centered perspective. My foci will include the manifold interpretative potential of blankness on the manuscript folio, the extensity that yields the voluminous heft of the codex, the torn-out pages of a quire, the trimmed margins, and intergraphemic space. My argument is that in order to fully understand the manuscript book, we need to look, in a literal sense, outside the words and images we traditionally regard as the only semantic containers.

 

Benjamin Fleming, University of Pennsylvania

Material Forms of Asian Manuscripts: Examples from Penn's Indic Collection

The material forms of medieval manuscripts in Asia differ notably from those in medieval Europe, and the history of printing and books also unfolded quite differently in both East and South Asia. To understand the history of text-production across Asia, in fact, one must try to "think outside the codex" almost completely and distinguish our modern idea of the "book" from the codex-like formats common in the West. This paper will explore the different perspectives offered by the material culture of Asian texts through the lens of Penn's Indic collection, on the one hand, and the Ramamala Library in Bangladesh, on the other; in the process, it shares some of the results of two in-progress projects to preserve the latter and reflects on the special challenges and opportunities of digitizing these materials.

 

Linda Chance, University of Pennsylvania, and Julie Nelson Davis, University of Pennsylvania

WORKSHOP I: The Handwritten and the Printed: The Limits of Format and Medium in Japanese Premodern Books

In this workshop we will consider some of the challenges that the study of the history of the book in Japan presents. Among the questions we will broach are: what is the relationship between the handwritten and the printed? How were the media and formats of book production important in production and reception? What are the presumed virtues of print and the limits of manuscript? What was the impact of moveable type in Japan? What is the relationship of text and image in the calligraphic tradition? What makes a text legible (and how do you read an illegible text)? How can scholars learn difficult script styles? How can we train a new generation of scholars to read original manuscripts? How have digital tools affected the ways that we collaborate on reading outside the corpus of transcribed manuscripts?

 

Kathryn Rudy, University of St. Andrews

Dirty Books: Quantifying users' responses to medieval manuscripts with analogue and digital methods

When medieval readers used manuscripts, they left fingerprints, stains, and other signs of wear in them. This paper seeks to quantify those stains in individual books in order to reveal which parts were intensely read and which parts ignored. Measuring dirt provides a concrete way of gauging reception.

Tim Stinson, North Carolina State University

Gamelyn’s Heirs: (In)completeness and Middle English Literature

The Tale of Gamelyn owes its popularity to what it is not; both its survival in twenty-five medieval manuscript copies and the critical attention devoted to it in subsequent centuries must be attributed primarily to its spurious status as one of Chaucer’s Canterbury Tales. It is fair to say that much of the interest in the poem has been editorial in nature from the time of Chaucer’s scribes right up to the present. The poem was likely introduced into the corpus of The Canterbury Tales in order to remedy problems of incompleteness, both of the fragmentary Cook’s Tale and of the larger group of tales forming the whole work. This essay uses The Tale of Gamelyn as a starting point for a broader consideration of incompleteness, a problem endemic to the editing and study of Middle English literature. A range of factors, including damaged manuscripts, scribal interventions and omissions, authorial revisions, and unfinished works, make incompleteness the norm rather than the exception and present significant challenges to editors and readers of Middle English works. We are heirs to the problems encountered by the book producers who first introduced Gamelyn into The Canterbury Tales; our collective desire for complete, canonical works is routinely frustrated by the fragmented state of much of our material and textual evidence. I offer an overview and critique of methods devised to accommodate incompleteness in printed editions of medieval works as well as a meditation on how these problems manifest themselves in new ways in an era of electronic editing and archive building.

 

Jim Ginther, St. Louis University & T-PEN

WORKSHOP II: Demo Workshop for T-PEN: Transcription for Paleographical and Editorial Notation

In this session, we will do a walk-through of the new tool, T-PEN (Transcription for Paleographical and Editorial Notation). I will demonstrate how to select a manuscript, create a transcription project, set up the tools that assist in transcription (including encoding options), and then walk through some transcription. Then I will demonstrate how to get your transcription data out of T-PEN so you can deploy it in other applications or projects. Finally, I will give a brief overview of a new tool in development, Tradamus, which will assist scholars of pre-modern texts who are creating critical editions. There will be time for Q&A and for attendees to take T-PEN for a test drive themselves.


Marie Turner, University of Pennsylvania

Forms of History: The Case of University of Pennsylvania MS Roll 1066

This paper considers the relationship between genealogy, history, and material form in late medieval historiography, taking as its starting point University of Pennsylvania MS Roll 1066, a Latin genealogical chronicle roll of the kings of England to Edward IV, likely produced in London on the occasion of Edward’s coronation in 1461. Part of a large but understudied corpus of fifteenth-century genealogical rolls, this manuscript and others like it provide us with a snapshot of English historiographical anxiety at a crucial political juncture: the Wars of the Roses. While the propagandistic function of these rolls seems fairly clear – to provide a means by which Edward could publicly claim descent from the legendary Brutus, great-grandson of Aeneas – such an ideological project is far from straightforward, requiring the complete breakdown and reconstruction of history itself as Edward’s line of descent becomes a fiction of genealogical power/desire. This paper looks at these rolls from two perspectives: first, I consider the continuities and discontinuities inherent in their project, asking how MS Roll 1066 reorganizes and manipulates history in unexpected ways, using the past to account for the present and making claims about the present as a way of understanding the past. Second, I will look to how the roll format mediates content, asking how constraints of form demand new and inherently multiple modes of readership within a single document, deconstructing the very lineal model of history it performs.

I will conclude by introducing a new digital humanities project I am currently beginning under the auspices of the Schoenberg Institute for Manuscript Studies that will see the creation of an online scholarly resource on fifteenth-century genealogical chronicles and the development of a web application for viewing non-codex manuscripts that takes into account the unique challenges of the roll format.

 

David McKnight, University of Pennsylvania

Traces on a Silicon Chip: The future of online scholarship - Lessons from the digital past

The full-scale adoption of computing for use in Humanities and Social Sciences research is merely half a century old. One of the manifestations of the computing revolution is the ubiquitous migration of texts and cultural information from stone, paper, and other physical formats into digital objects. In this paper I will provide a historical overview of 20th-century attempts to adopt modern “copying” technologies beyond the handwritten or printed facsimiles which have served scholars for centuries. From the introduction of photography at the British Museum at the beginning of the 20th century to Vannevar Bush’s notion of the Memex in the 1940s, to early experiments with automatic page turners in the 1950s, to commercial experiments in data capture in the 1980s, to, finally, 21st-century web-based capture and dissemination systems – all of these attempts at creating facsimiles provide both solutions and problems in terms of issues relevant to the Museum, Library and Archive communities: authenticity, feasibility, accuracy, use, and long-term preservation. I will conclude with a discussion of the ideal of digital cultural transmission on a global scale and whether this goal is feasible and sustainable, and at what price.



Evyn Kropf, University of Michigan

Will that surrogate do? Reflections on digitally mediated collaborative description for Islamic manuscripts at the University of Michigan

Work with manuscript surrogates is certainly not a new phenomenon, and historically, researchers have relied on photographs, microfilm, photocopies, and other analog surrogates to conjure particular manuscript features when limited resources, restricted access, condition issues, and other concerns place the physical artifact out of reach.

Nevertheless, material characteristics - the particular concern of codicologists and indeed of all researchers interested in addressing the essential historical context of a manuscript’s text and ornament - are notoriously poorly mediated by the typical analog surrogate. Researchers not privileged to examine a codex themselves must rely on available published descriptions, if such descriptions address these codicological features at all.

Digital surrogates have the potential to improve on the analog, but only if properly designed and deployed. What can the typical page image surrogate mediate, in expert hands and out? What a surrogate lacks, can a text-based description help mediate? How does proliferation of inadequate surrogates influence scholarly practice? Where scholars are content to neglect the material context, do we have a responsibility to suggest to them otherwise, perhaps through better presentation of digital surrogates?

This paper considers these questions in light of the recent outpouring of Islamic manuscript digitization projects and in particular the work of "Islamic Manuscripts at Michigan," a collaborative cataloguing project that relied heavily, but not exclusively, on interaction with digital manuscript surrogates to produce a database of rich bibliographical and codicological data. Preliminary results from an ongoing study shed further light on current digital manuscript surrogate use and its impact on scholarship.


Christopher Blackwell, Furman University & The Homer Multitext

WORKSHOP III: Scholarship Outside the Codex: Citation-based digital workflows for integrating objects, images and text without making a mess

The Homer Multitext (HMT) is a collaboration among several institutions, involving scores of professional and undergraduate collaborators creating digital editions of Byzantine manuscripts. Our data model follows the CITE architecture for texts, data, and images, and within that, our model for “texts” follows the OHCO2 model, defining a text as an “ordered hierarchy of citation objects”. This model allows us to use diverse expressions of our data, according to functional needs, while confirming the data’s integrity. In other words, to take texts as an example, we can edit texts as TEI-XML files, convert them to tabular format for further processing, express their contents as RDF triples, and re-assemble the RDF into TEI-XML for reformatting and display. At the moment, the contents of the HMT, when encoded as RDF, consist of 1.5 million statements. Working in a version-control system, our collaborators subject their individual contributions to automated tests that validate XML files, confirm that Greek texts use only approved Unicode characters, confirm that each “token” in a text either parses as a legitimate Greek word or is otherwise identified, confirm the integrity of both sides of each index, and so on. This suite of tests is called “HMT Mandatory Ongoing Maintenance”, or “HMT MOM”. When their data passes all tests, their commits are merged with the canonical body of HMT data, also under version control. A nightly scheduled, automated task recompiles the HMT into RDF and re-loads it into a triplestore. The CITE Servlet web application implements the CITE services by querying that triplestore and constructing the results into valid, namespaced XML responses, which are formatted by XSLT and CSS for human-readable display. For the first part of this workshop, we will walk through this process from end to end, from creation of new text through online display for end-users. The second part will be reserved for discussion.
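The kind of token-level validation described above can be sketched in a few lines. This is a hypothetical illustration, not the HMT MOM code itself: the approved character ranges and the list of "otherwise identified" tokens are placeholders standing in for the project's actual rules.

```python
import re

# Illustrative sketch of one "HMT MOM"-style check: each token must either
# consist entirely of approved Greek Unicode characters or appear in a set
# of explicitly identified exceptions; anything else blocks the merge.
GREEK = re.compile(r"^[\u0370-\u03FF\u1F00-\u1FFF]+$")  # basic + extended Greek blocks

def validate_tokens(tokens, identified_exceptions):
    """Return the tokens that fail both checks."""
    failures = []
    for tok in tokens:
        if GREEK.match(tok) or tok in identified_exceptions:
            continue
        failures.append(tok)
    return failures

# First two tokens are Greek; "###" is assumed to be explicitly identified.
failures = validate_tokens(["μῆνιν", "ἄειδε", "###"], identified_exceptions={"###"})
```

In a version-controlled workflow like the one described, a non-empty `failures` list would simply cause the automated test run to fail, keeping the contribution out of the canonical data until it is corrected.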

Benjamin Albritton, Stanford University, and Rob Sanderson, Los Alamos National Laboratory

SharedCanvas: Dealing with distributed resources and collaboration in digital facsimiles

To date, digital manuscript projects have often reflected the codices and libraries they purport to represent online: tightly bound, conceived as individual volumes or collections produced by a single author or group of authors, offering users a limited and restricted set of tools and interactions with their contents. The reality of working with digital resources, however, requires new approaches in order to create rich and dynamic presentations of digital medieval objects.

Interoperability of medieval data - facilitated by data models like SharedCanvas and community-adopted protocols like the International Image Interoperability Framework - opens up the possibility of building intricate and evolving views of manuscript materials that challenge our notions of control and curation.  As crowd-sourced annotations, digital editions, electronic publications, student transcriptions, scholarly conversations, and machine-generated analyses become part of the information "about" an object, and that data can live anywhere on the web, we are forced to respond to questions of ownership, authorship, provenance, and quality (among other concerns) in new and creative ways.

This paper will address some of the considerations, problems, and potential new directions for dealing with manuscript content in a rapidly shifting landscape, focusing on the SharedCanvas data model and its use by libraries and repositories, scholarly projects using manuscript materials, and the software designed to facilitate that work.
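The core pattern that lets annotations "live anywhere on the web" can be sketched as follows. This is a simplified illustration of the SharedCanvas/Open Annotation idea, not code from the projects named above: the vocabulary terms follow the Open Annotation model, but every URI in the example is an invented placeholder.

```python
# Minimal sketch: an annotation associates a body (here, a transcription
# hosted anywhere) with a rectangular region of a canvas, expressed as a
# Media Fragments "xywh" suffix on the canvas URI. All URIs are placeholders.
def make_annotation(body_uri, canvas_uri, x, y, w, h):
    return {
        "@context": "http://www.shared-canvas.org/ns/context.json",  # assumed context
        "@type": "oa:Annotation",
        "motivation": "sc:painting",
        "resource": body_uri,                        # the content being attached
        "on": f"{canvas_uri}#xywh={x},{y},{w},{h}",  # region of the canvas
    }

anno = make_annotation(
    "http://example.org/transcriptions/line1",
    "http://example.org/manuscript/canvas/1r",
    100, 200, 500, 40,
)
```

Because the body and the canvas are both just URIs, a student transcription, a scholarly comment, and a machine-generated analysis can all target the same canvas region from different servers, which is precisely what raises the questions of ownership and provenance the paper addresses.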

Martin Foys, King's College London & Drew University

Small Data and Multiple Hosts: Bespoke Targets & Annotations in the Virtual Mappa Project

This talk will survey the current work of the Virtual Mappa Project (VMP), a partnership between the DM Project and the British Library to establish how medieval maps of the world and related geographic texts may be collected, studied, and annotated by scholars. In particular, this project emphasizes the continuing need within digital humanities resources for scholars to be able to generate bespoke scholarship - custom and targeted linked data of moments within images and texts - across large collections. The presentation will showcase the first instance of the DM Project's new multi-up working environment, with innovative methods for managing the display, selection, and annotation of several manuscript images and transcribed texts simultaneously by collaborating users within the same project. It will also review newly completed work for exporting the linked data created by users in Open Annotation Collaboration (OAC) compliant XML and RDF-triple formats. Finally, this presentation will discuss the next phase of work already in development - establishing customizable collections of such annotated materials, drawn from manuscript manifests hosted across multiple institutional repositories, as well as look at other developing applications of the DM Project for more general manuscript study.

Dot Porter, University of Pennsylvania, and Doug Emery, University of Pennsylvania

WORKSHOP IV: Of Apples and Apple Pie: Exploring the relationship between raw data and digital scholarship

In this workshop, focusing specifically on digitized medieval manuscripts, we will look at several examples of raw data, interfaces to that data, and digital scholarship. The aim is to provide an overview of different approaches that institutions and individuals are taking to creating and using data. We will also explore the path that data takes from being a raw ingredient to becoming part of a piece of scholarship - as an apple is sliced and mixed with other ingredients to make a pie. But is there really such a path for data? Can "raw data" - digitized images, descriptive metadata, catalog records - itself be a form of digital scholarship? Do interfaces - online catalogs and page-turning environments - just present data "as it is" or can they help us understand the data and make critical arguments? And is "apples and apple pie" a fair analogy to "raw data and digital scholarship", or is there a better analogy to be found?