If you want to see a manuscript that is housed in Berlin and you are located in Montreal, libraries have a few ways of showing you the pages online. You can scroll through thumbnail images, go page-by-page, or download a very large PDF file.
These options are less than ideal: clicking through each thumbnail is slow and cumbersome, and the PDF will either take up a lot of space and load very slowly, or shrink the images so much that fine details of the manuscript are lost.
Diva is a free and open-source technology that improves on the Google Books approach to viewing digitized book images. With Diva, the user can scroll through a document page by page, then zoom in and out on any page to see more or less detail. There is nothing to download; Diva works right in the web browser.
Diva works by breaking large images into small square tiles that load in the user’s browser only as required, much like viewing a map of the world in Google Maps. Since a reader is usually interested in only a small part of a page at any moment, Diva fetches just the portion of the book currently in view; there is no large PDF to download.
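The tiling idea can be sketched in a few lines. This is an illustrative example, not Diva's actual code: given a viewport over a page image, it computes which fixed-size tiles intersect the visible area, and only those would be requested from the server.

```typescript
interface Viewport {
  x: number;      // left edge of the visible area, in image pixels
  y: number;      // top edge
  width: number;  // viewport width
  height: number; // viewport height
}

// Return the [row, column] index of every tile the viewport overlaps.
function visibleTiles(view: Viewport, tileSize: number): Array<[number, number]> {
  const firstCol = Math.floor(view.x / tileSize);
  const lastCol = Math.floor((view.x + view.width - 1) / tileSize);
  const firstRow = Math.floor(view.y / tileSize);
  const lastRow = Math.floor((view.y + view.height - 1) / tileSize);

  const tiles: Array<[number, number]> = [];
  for (let row = firstRow; row <= lastRow; row++) {
    for (let col = firstCol; col <= lastCol; col++) {
      tiles.push([row, col]);
    }
  }
  return tiles;
}

// A 500x400 window onto the page needs only a handful of 256-pixel tiles,
// no matter how large the full scanned image is.
const needed = visibleTiles({ x: 300, y: 100, width: 500, height: 400 }, 256);
console.log(needed.length); // 6 tiles: rows 0-1, columns 1-3
```

Scrolling or zooming simply changes the viewport, so the viewer downloads a few new tiles at the edges rather than re-fetching the whole page.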
There are a number of viewing options within Diva: the user can zoom out and move through the entire text quickly before stopping on a page to view details, or switch to grid view for an even faster overview. In grid view, one can adjust how many pages appear per row. Once on the chosen page, the gear icon lets the user adjust brightness, contrast, and rotation—so one can read marginalia easily, for example. The zoom feature even works while manipulating the image.
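Adjustments like brightness and contrast can happen entirely in the browser, without asking the server for a new image. A minimal sketch of the idea (again illustrative, not Diva's actual implementation) applied to RGBA pixel bytes such as those in a canvas `ImageData` buffer:

```typescript
// Apply brightness (an offset) and contrast (a multiplier around mid-grey)
// to RGBA pixel data; Uint8ClampedArray clamps each result to 0-255.
function adjustPixels(
  pixels: Uint8ClampedArray, // RGBA bytes, 4 per pixel
  brightness: number,        // e.g. -100 to 100
  contrast: number           // e.g. 0.5 to 2
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    for (let c = 0; c < 3; c++) { // R, G, B channels
      out[i + c] = contrast * (pixels[i + c] - 128) + 128 + brightness;
    }
    out[i + 3] = pixels[i + 3]; // leave alpha untouched
  }
  return out;
}

// One mid-grey pixel brightened by 40:
const one = adjustPixels(new Uint8ClampedArray([128, 128, 128, 255]), 40, 1);
console.log(one[0]); // 168
```

Because only the pixels already on screen are transformed, the sliders respond instantly even for very large scans.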
While Diva is not specific to any one type of book, it is especially helpful for viewing manuscripts. The detail it reveals is astounding: we can see tiny erosions in the paint, or tell when an artist was experimenting with depth, because the shading and shadows on a tiny flower bud become visible.
Diva is developed at McGill in the Distributed Digital Music Archives and Libraries laboratory, headed by Dr. Ichiro Fujinaga. The project has been funded by a number of partners, including the Swiss National Science Foundation, the Social Sciences and Humanities Research Council, and the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT).