Technology in the hands of scholars, conservators, and archaeologists alike has long been central to the successful preservation and analysis of the Dead Sea Scrolls. While early technologies involved sticky tape for rejoining fragments and analog photography for their documentation, the advanced tools of today allow fragile scrolls to be read without even unwrapping them.
The breathtaking range of the scrolls spans everything from major texts, such as the Temple Scroll, to unopened phylactery cases with slips of hidden writing, to a small number of completely unopened scrolls. Although the glory of the collection is represented by the substantially complete and amazingly preserved copy of the Book of Isaiah on display at the Shrine of the Book in Jerusalem, its remarkable condition is the exception rather than the rule. According to the Israel Antiquities Authority (IAA), the scroll archive contains more than 25,000 fragments, many no larger than a postage stamp. Many of these fragments consist of multiple layers: portions of a single scroll stuck together through damage and decay.
The painstaking work of conservators has stabilized these fragments against further decay and achieved impressive physical restoration. In many cases, however, little more can be done, and thousands of fragments remain unstudied because of the difficulty and risk of invasively separating the layers that stubbornly cling to each other.
Fortunately, researchers have developed non-invasive, digital restoration techniques, including “virtual unwrapping,” which reveals the interior writing of rolled-up scrolls and multilayer fragments. Virtual unwrapping uses penetrating X-ray images to create a 3D model of an object. That 3D model then passes through a series of steps that together form the virtual unwrapping pipeline.
First, each individual layer on which writing may sit (each wrap of a scroll, for example) is identified and modeled. Every point on these segmented surfaces is then “textured,” that is, assigned a brightness value corresponding to the density of that particular spot in the 3D model. Denser materials, such as certain kinds of ink, show up brighter than less dense materials, such as the animal skin often used as a writing surface. The software exploits this variation in density and brightness to make the text visible.
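To make the texturing step concrete, here is a minimal sketch in Python (using NumPy) of how a segmented surface might sample density values from a scanned volume. The volume, the flat "surface," and the density values are all invented for illustration; real micro-CT data and scroll segmentations are far more complex.

```python
import numpy as np

# A toy density volume standing in for a micro-CT scan: mostly low-density
# "parchment" (value 0.2) with one denser stroke of "ink" (value 0.9).
vol = np.full((40, 40, 40), 0.2)
vol[20, 10:30, 18:22] = 0.9  # hypothetical ink sitting on one layer

# A segmented surface: a grid of (x, y, z) sample points lying on one wrap.
# Here we fake a flat layer at x = 20; a real segmentation would be curved.
ys, zs = np.meshgrid(np.arange(40), np.arange(40), indexing="ij")
surface = np.stack([np.full_like(ys, 20), ys, zs], axis=-1)

def texture(volume, points):
    """Assign each surface point the density at its nearest voxel."""
    idx = np.round(points).astype(int)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]

# The result is a 2D image of the layer: ink reads brighter than parchment.
tex = texture(vol, surface)
```

The key idea is only that each surface point inherits the scanned density beneath it; actual pipelines interpolate between voxels and average through the thickness of the layer.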
Because the model of the writing surface reflects the actual curvature of the scroll, it must then be digitally flattened for reading. This is accomplished through a material simulation of the kind common in video games and movies, used for effects like cloth flags waving in the wind.
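The flattening step can be sketched as a simple mass-spring (cloth-style) relaxation: the 2D positions of the surface points are nudged until distances between neighbors match the distances measured on the curved 3D surface. The half-cylinder stand-in, the grid size, and the relaxation constants below are illustrative assumptions, not the actual pipeline.

```python
import numpy as np

# Toy stand-in for one segmented wrap: a grid of 3D points lying on a
# half-cylinder of radius 3 (all geometry here is invented for illustration).
n_u, n_v = 20, 10
theta = np.linspace(0.0, np.pi, n_u)
T, V = np.meshgrid(theta, np.linspace(0.0, 5.0, n_v), indexing="ij")
pts3d = np.stack([3.0 * np.cos(T), 3.0 * np.sin(T), V], axis=-1)

# Spring rest lengths: neighbor distances measured on the 3D surface.
rest_u = np.linalg.norm(pts3d[1:] - pts3d[:-1], axis=-1)
rest_v = np.linalg.norm(pts3d[:, 1:] - pts3d[:, :-1], axis=-1)

# Initial 2D guess: project onto the x-z plane (compressed near the edges).
pts2d = pts3d[..., [0, 2]].copy()

# Relaxation: repeatedly move neighboring points toward their rest lengths.
for _ in range(2000):
    for hi, lo, rest in (
        ((slice(1, None),), (slice(None, -1),), rest_u),
        ((slice(None), slice(1, None)), (slice(None), slice(None, -1)), rest_v),
    ):
        d = pts2d[hi] - pts2d[lo]
        length = np.linalg.norm(d, axis=-1, keepdims=True)
        corr = 0.25 * (length - rest[..., None]) * d / np.maximum(length, 1e-12)
        pts2d[hi] -= corr
        pts2d[lo] += corr

# The flattened strip is now roughly as wide as the 3D arc length
# (about pi * 3 = 9.42), not the 6.0 chord of the naive projection.
flat_width = np.linalg.norm(pts2d[1:, 5] - pts2d[:-1, 5], axis=-1).sum()
```

Because distances on the writing surface are preserved, the text keeps its proportions after flattening, which is exactly what a reader needs.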
Virtual unwrapping is completely non-invasive, as X-rays induce no damage during imaging, and the analysis takes place on the data, not the physical object. The technique was successfully applied for the first time in 2015, when an ancient Hebrew scroll from Ein Gedi was safely revealed to be an early copy of the Book of Leviticus.1
This breakthrough technology is now being applied to the 25,000 fragments from Qumran. Among them is a multilayered fragment (1032a; see images above) with text concealed between a dozen stuck layers; even this can now be read digitally. Perhaps more exciting still, recent approaches drawn from artificial intelligence (AI) can enhance and sharpen the results of virtual unwrapping. As anyone who has ever broken a bone knows, the gray-scale imagery produced by an X-ray is not as compelling as a color photograph. But using a machine-learning framework, researchers can now render those gray-scale images in full color.
To achieve this “data-informed colorization,” the X-ray evidence of ink and parchment from deep inside a closed fragment is matched with a color photograph of the visible portions of the fragment. A convolutional neural network (CNN) is then trained to build a map between the two imaging modalities. Given a large number of such paired examples of color photography and X-ray imagery, the CNN learns the conversion between the two types of data. This learned conversion makes virtually unwrapped fragments look like color photographs.
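As a drastically simplified stand-in for the CNN, the sketch below fits a per-pixel linear model (the equivalent of a single 1x1 convolution) from X-ray intensity to RGB color, using synthetic paired "parchment" and "ink" pixels. A real colorization network learns a far richer, spatially aware mapping, but the principle of learning from paired examples is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired training data: X-ray intensities for parchment and ink
# pixels, alongside the RGB colors a photograph would show for each.
xray = np.concatenate([rng.uniform(0.1, 0.3, 500),    # dim parchment
                       rng.uniform(0.7, 0.9, 500)])   # bright, dense ink
# Invented target colors: tan parchment, near-black ink.
rgb = np.where(xray[:, None] < 0.5,
               [0.76, 0.60, 0.42], [0.05, 0.04, 0.03])

# Fit intensity -> RGB as a linear model with bias, via least squares.
X = np.stack([xray, np.ones_like(xray)], axis=1)      # [intensity, 1]
W, *_ = np.linalg.lstsq(X, rgb, rcond=None)

def colorize(intensity):
    """Map an X-ray intensity image to a pseudo-color RGB image."""
    flat = np.stack([intensity.ravel(), np.ones(intensity.size)], axis=1)
    return (flat @ W).clip(0, 1).reshape(*intensity.shape, 3)

# Dim pixels come out parchment-colored, bright pixels ink-colored.
page = colorize(np.array([[0.20, 0.80], [0.15, 0.85]]))
```

The fitted model converts any unseen intensity image into color, just as the trained CNN converts the gray-scale output of virtual unwrapping into a photograph-like result.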
From layer separation of a digitized manuscript, to ink identification, to digital flattening of pages, to virtual “recoloring,” this new AI technology produces digital images of unopened manuscripts that rival actual photographs of undamaged parchment texts. As the next step in a long line of technological advances, this forges a new pathway for restoration. Machine learning and AI will continue to push against the boundaries of what was previously considered impossible. Such innovations will support and inspire the next generation of scholars dedicated to the study of fragmentary, damaged collections.
Christy Chapman is Research and Partnership Manager for the Digital Restoration Initiative in the Department of Computer Science at the University of Kentucky.
W. Brent Seales is Gill Professor in the Department of Computer Science at the University of Kentucky and Director of the Center for Visualization and Virtual Environments. He focuses on digital study and restoration of inscribed artifacts.
1. Robin Ngo, “Book of Leviticus Verses Recovered from Burnt Hebrew Bible Scroll,” Bible History Daily (blog), July 21, 2015.