The Digital Michelangelo Project

Figure 1: Our motorized gantry positioned in front of Michelangelo's David. From the ground to the top of the scanner head is 7.5 meters.

Marc Levoy
Computer Science Department
Stanford University

May 28, 2003
revised November, 2003 for inclusion in the book
Exploring David, Giunti Press, March 2004

Figure 2: The red stripe generated by our laser triangulation scanner sweeps across the face of the David. By analyzing these sweeps, we digitized the David with a spatial resolution of 0.29mm.


Introduction

Recent improvements in laser rangefinder technology allow us to reliably and accurately digitize the external shape and surface characteristics of many physical objects. Examples include machine parts, cultural artifacts, and design models for the manufacturing, moviemaking, and video game industries. As a demonstration of this technology, a team of 30 faculty, staff, and students from Stanford University and the University of Washington and I spent the 1998-99 academic year digitizing the sculptures and architecture of Michelangelo. During this time, we scanned the David, the Unfinished Slaves (Atlas, Awakening, Bearded, and Youthful), the St. Matthew, the four allegorical statues from the Medici Tombs (Night, Day, Dawn, and Dusk), the architectural interior of the Tribuna del David in the Galleria dell'Accademia, and the architectural interior of Michelangelo's New Sacristy in the Medici Chapels. The goals of this project were scholarly and educational. Our sponsors were Stanford University, Interval Research Corporation, and the Paul G. Allen Foundation for the Arts. In this article, I describe the technological underpinnings, logistical challenges, and possible outcomes of this project.

Technological underpinnings

From a technological standpoint, the Digital Michelangelo Project consisted of two components: a collection of 3D scanners and a suite of software for processing the data they return. Our principal scanner was a laser triangulation rangefinder mounted on a motorized gantry, built to our specifications by Cyberware Inc. of Monterey, California. Figures 1 and 2 show our scanner standing next to the David. Although our scanner was specialized for digitizing large statues, the principles of laser triangulation scanning are fairly universal and easy to explain. The scanner emits a thin sheet of red laser light, which illuminates an object (for example, a statue), painting a stripe onto it. A video camera looks at the object from the side. By analyzing the image seen by the camera, and by knowing the position and orientation of the laser and the camera, one can use trigonometry to determine the location in space of each point on the stripe. This process, called triangulation, is the same one used in laser surveying, except that we perform it on many points at once, rather than on only one point.
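
To make the geometry concrete, the sketch below intersects a pixel's viewing ray with the plane of the laser sheet, assuming an idealized pinhole camera at the origin and a known laser plane. The function names, calibration parameters, and the 20-degree tilt in the example are illustrative placeholders, not our scanner's actual calibration model.

```python
import numpy as np

def pixel_ray(u, v, fx, fy, cx, cy):
    """Viewing-ray direction through pixel (u, v) of a simple pinhole camera."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def triangulate(u, v, plane_point, plane_normal, fx, fy, cx, cy):
    """Intersect the pixel's viewing ray (camera at the origin) with the laser plane.

    Returns the 3D location of the point on the laser stripe seen at pixel (u, v).
    """
    d = pixel_ray(u, v, fx, fy, cx, cy)
    # Ray: p = t * d.  Plane: (p - plane_point) . plane_normal = 0.
    t = np.dot(plane_point, plane_normal) / np.dot(d, plane_normal)
    return t * d

# Example with made-up numbers: a laser plane 1 m in front of the camera,
# tilted 20 degrees away from the viewing direction.
theta = np.radians(20.0)
plane_point = np.array([0.0, 0.0, 1.0])
plane_normal = np.array([np.sin(theta), 0.0, -np.cos(theta)])
point = triangulate(320, 240, plane_point, plane_normal,
                    fx=1000.0, fy=1000.0, cx=320.0, cy=240.0)
```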

By sweeping the laser stripe across the object, we obtain a full record of the three-dimensional shape of the object, at least as seen from one side. The resulting data takes the form of a grid of points in three-dimensional space, sometimes called a range image. By connecting neighboring points, we produce a mesh of tiny triangles. Figure 3 shows such a mesh from the face of St. Matthew. By scanning the object repeatedly from different angles, and then combining the resulting range images, we produce a new, larger triangle mesh that completely describes the object, at least to the extent we can see it from the outside. In our scanner, the triangles are about 1/4 millimeter on a side, small enough to capture Michelangelo's chisel marks. After we've finished recording the shape of the object, we scan it a second time, this time illuminated by a white spotlight mounted on our scanner, and we use a second camera to record the object's color.
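
The sketch below illustrates, under simplified assumptions, how such a range image can be turned into a mesh by connecting neighboring samples: each grid cell with four usable corners is split into two triangles, and cells with missing samples are skipped. The array names are hypothetical; this is not the project's actual meshing code.

```python
import numpy as np

def range_image_to_mesh(points, valid):
    """Connect neighboring samples of an H x W range image into a triangle mesh.

    points: (H, W, 3) array of 3D sample positions.
    valid:  (H, W) boolean mask marking usable samples.
    Returns (vertices, faces), where faces index into the flattened grid.
    """
    H, W, _ = points.shape
    faces = []
    for i in range(H - 1):
        for j in range(W - 1):
            if valid[i, j] and valid[i, j+1] and valid[i+1, j] and valid[i+1, j+1]:
                a, b = i * W + j, i * W + j + 1
                c, d = (i + 1) * W + j, (i + 1) * W + j + 1
                faces.append((a, b, c))   # upper-left triangle of the cell
                faces.append((b, d, c))   # lower-right triangle of the cell
    return points.reshape(-1, 3), np.array(faces)
```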

The second technical component of our project was a suite of software for processing the range and color data returned by our scanner. Our range processing pipeline consisted of aligning the scans taken from different gantry positions, merging them using a computer algorithm developed at Stanford University, and filling holes using another algorithm we developed. Figure 4 shows the result of merging 104 range images of St. Matthew. Holes arise in our models because there are crevices a sculptor can reach with a hammer and chisel that we cannot reach using our laser scanner and camera, which make an angle of 20 degrees with respect to one another. Of course, holes cannot be filled with the correct geometry of the statue, which we couldn't see, but the invented geometry is usually plausible, and the resulting mesh of triangles is now watertight. For scientific applications, we typically label our hole-fill geometry to distinguish it from geometry we actually observed.
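
Our actual alignment and merging algorithms are more elaborate than can be shown here, but the following sketch of a single rigid-alignment step, in the spirit of the widely used iterative closest point (ICP) method, conveys the flavor of aligning two overlapping scans: match each point to its nearest neighbor in the other scan, then solve in closed form for the rotation and translation that best align the matched pairs.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration (illustrative, not our production code).

    source, target: (N, 3) and (M, 3) arrays of 3D points from two scans.
    Returns the transformed source points and the rigid motion (R, t).
    """
    tree = cKDTree(target)
    _, idx = tree.query(source)              # nearest-neighbor correspondences
    matched = target[idx]
    src_c, dst_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)              # closed-form (Kabsch/Horn) solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return source @ R.T + t, R, t
```

In practice this step is repeated until the motion converges, and many scans are aligned jointly rather than pairwise.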

Our color processing pipeline consisted of compensating for ambient lighting, discarding pixels affected by shadows or specular reflections, and factoring out the dependence of observed color on illumination by our spotlight. Since by this point we have a three-dimensional model of the surface, we know its location and orientation relative to our spotlight, so we can easily remove this dependence. The result of our range and color processing pipeline is a single, closed, irregular triangle mesh with an RGB (red, green, and blue) reflectance triplet at each vertex. Figure 5 shows a computer-generated rendering of the head of St. Matthew with coloring from our photographic data.
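
As a rough illustration of that final step, the sketch below divides an observed color by the cosine and inverse-square terms of a point light under a Lambertian assumption. It omits the ambient compensation and shadow/specular rejection described above, and the function and parameter names are illustrative rather than taken from our pipeline.

```python
import numpy as np

def estimate_reflectance(observed_rgb, vertex, normal, light_pos, light_rgb):
    """Factor a point light's contribution out of an observed color (Lambertian model).

    observed_rgb: color seen by the camera at this vertex, with ambient light already
    subtracted and shadowed or specular pixels already discarded.
    Returns an RGB reflectance estimate, or None if the illumination is too grazing.
    """
    to_light = light_pos - vertex
    r2 = np.dot(to_light, to_light)          # inverse-square distance falloff
    l = to_light / np.sqrt(r2)
    cos_theta = np.dot(normal, l)            # Lambert's cosine law
    if cos_theta < 0.1:                      # too oblique to give a reliable estimate
        return None
    return observed_rgb * r2 / (light_rgb * cos_theta)
```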

Non-photorealistic renderings of our datasets are also possible. For example, by coloring each vertex of a mesh according to its accessibility to a virtual probe sphere rolled around on the mesh, we can produce a visualization that seems to show the structure of Michelangelo's chisel marks more clearly than a realistic rendering. Figure 6 shows an example of this type of rendering, again using St. Matthew. We believe that the application of geometric algorithms and non-photorealistic rendering techniques to scanned 3D artworks is a fruitful area for future research.
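
The sketch below is a crude, point-based stand-in for accessibility shading: for each vertex it records the largest probe-sphere radius that can touch the vertex along its normal without swallowing any other vertex. A proper computation rolls the sphere over the actual surface; this approximation is only meant to suggest the idea, and the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def accessibility(vertices, normals, radii):
    """Approximate per-vertex accessibility.

    vertices: (N, 3) points; normals: (N, 3) unit outward normals;
    radii: candidate probe radii, sorted ascending.
    Returns, for each vertex, the largest radius whose tangent probe sphere
    contains no other vertex.
    """
    tree = cKDTree(vertices)
    access = np.zeros(len(vertices))
    for i, (v, n) in enumerate(zip(vertices, normals)):
        for r in radii:
            center = v + r * n                          # sphere tangent to the surface at v
            inside = tree.query_ball_point(center, r * 0.99)  # points swallowed by the probe
            if any(j != i for j in inside):
                break
            access[i] = r
    return access
```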

Logistical challenges

The Digital Michelangelo Project was as much a production project as a research project, and we faced logistical challenges throughout. One significant challenge was the size of our datasets. Our largest dataset is of the David, computer renderings of which are shown in figure 7. It was acquired over a period of 4 weeks by a crew of 22 people scanning 16 hours per day, 7 days a week. The dataset contains 400 individually aimed scans, comprising 2 billion polygons and 7,000 color images. Losslessly compressed, it occupies 60 gigabytes of computer storage. Although most of the techniques used in this project were taken from the existing computer graphics literature, the scale of our datasets precluded the use of many published techniques and forced us to modify or re-implement others.

A second logistical challenge was ensuring the safety of the statues during scanning. Laser triangulation is fundamentally a non-contact digitization method; only light touches the artwork. Nevertheless, the digitization process involved positioning a scanner close to a precious artwork, so accidental collisions between the scanner and the statue were a constant threat. To prevent collisions, we used a combination of scanner design features - in particular, a long standoff distance and pressure-sensitive motion cutoff switches - as well as safe operating procedures and extensive training of our scanning crew. To reduce the chance of damage in case of inadvertent contact, our scan head was encased in foam rubber.

Uses for our models

The first question people ask us about these models is whether we created them in order to make copies of the statues for sale. This wasn't one of our goals. However, our technology certainly could be used to scan and replicate statues. Among the other clients we envision for these models are art historians, museum curators, educators, and the public.

For art historians, our methods provide a tool for answering specific geometric questions about statues. Questions we have been asked about Michelangelo's statues include the number of teeth in the chisels employed in carving the Unfinished Slaves, the smallest block from which each of the allegories in the Medici Chapel could have been carved, and whether the giant statue of David is well balanced over his ankles. Aside from answering specific questions like these, art historians envision computer models becoming a repository of information about specific works of art.
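
The balance question, for instance, reduces to computing the statue's center of mass and checking where it falls relative to the ankles. The sketch below computes the center of mass of a closed, consistently oriented triangle mesh of uniform density using signed tetrahedra; it is one way such a question might be approached, not the exact analysis performed for the project, and the names are illustrative.

```python
import numpy as np

def center_of_mass(vertices, faces):
    """Center of mass of a closed, outward-oriented triangle mesh of uniform density.

    Sums signed tetrahedra formed by each face and the origin (divergence theorem).
    vertices: (N, 3) array; faces: iterable of (a, b, c) vertex indices.
    """
    total_vol, weighted = 0.0, np.zeros(3)
    for a, b, c in faces:
        v0, v1, v2 = vertices[a], vertices[b], vertices[c]
        vol = np.dot(v0, np.cross(v1, v2)) / 6.0   # signed tetrahedron volume
        centroid = (v0 + v1 + v2) / 4.0            # tetrahedron centroid (4th vertex is the origin)
        total_vol += vol
        weighted += vol * centroid
    return weighted / total_vol

# Balance check (illustrative): project the center of mass straight down and test
# whether it lands inside the 2D footprint spanned by the statue's ankles.
```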

For educators, computer models provide a new tool for studying works of art. In a museum, we see most statues from a limited set of viewpoints. Computer models allow us to look at statues from any viewpoint, change the lighting, and so on. In the case of Michelangelo's statues, most of which are large, the available views are always from the ground looking up. Michelangelo knew this, and he designed his statues accordingly. Nevertheless, it is interesting and instructive to look at his statues from other viewpoints. Looking at the David from unusual directions has taught us many things about the statue's ingenious design, as shown in figures 8 and 9.

For museum curators, models displayed on a computer screen are not likely to replace the experience of walking around a statue, but they can nevertheless enhance it. To explore this idea, we were recently invited by the Galleria dell'Accademia to install an interactive kiosk next to the David, timed to coincide with the statue's restoration. Although it may seem ludicrous to place a computer in front of a statue, and on the computer screen to display a 3D model of that same statue, we have actually found that the computer focuses visitors' attention on the statue and allows them to view it in a new way. By exploring the statue themselves, they turn the viewing of art into an active rather than a passive experience. The art museum becomes a hands-on museum.

Finally, for the public, we think that interactive viewing of computer models may eventually have the same impact on the plastic arts that high-quality art books have had on the graphic arts: giving the educated public a level of familiarity with great works of art that was previously possible only by traveling to see them.

Distributing our data

To make these uses possible, we need to distribute our computer models. Indeed, one of the primary goals of our project was to create and disseminate an archive of 3D scanned models. Although our archive is still incomplete, we have decided to make what models we have created, and our entire corpus of raw range data, available to the scholarly community. The URL of this archive is http://graphics.stanford.edu/data/mich/. The models in this archive are available to anyone, but for scientific use only, and users must first obtain a license in writing from Stanford University.

For the general public, distributing our models is more problematic. Italian intellectual property law permits us to license our 3D models to scholars for scientific use - which we have been doing for several years - but it prohibits us from freely distributing the models, for example on the Internet. To address this problem, we have developed a "remote rendering system". Anybody with a personal computer can download a program that contains a low-quality version of our model. This permits them to navigate around the statue, but the images they see lack fine detail. When they release the mouse button, instructions are sent to Stanford, where an image of the statue, seen from the same viewpoint and with the same lighting, is generated using our high-quality model. This image is sent back to the user, where it overwrites the low-quality image on their computer screen. In this way, the general public can enjoy flying around our 3D model of the David without being able to download the data itself.
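
The client side of such a system might look like the sketch below: while the user drags, the coarse local model is drawn; when the mouse is released, the viewpoint and lighting are sent to a server, which returns a single high-quality frame. The URL, message format, and function names here are hypothetical, not our actual protocol.

```python
import json
import urllib.request

# Placeholder endpoint; the real server address and API are not shown here.
SERVER_URL = "https://example.stanford.edu/render"

def request_high_quality_frame(camera_pose, light_dir, width, height):
    """Send the current viewpoint and lighting to the render server and return
    the bytes of the high-resolution image it produces (e.g. a JPEG)."""
    payload = json.dumps({
        "camera": camera_pose,   # e.g. eye position, look-at point, up vector
        "light": light_dir,
        "width": width,
        "height": height,
    }).encode("utf-8")
    req = urllib.request.Request(SERVER_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()       # the viewer overwrites the coarse frame with this image
```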

For more information on the Digital Michelangelo Project, for more images generated from our computer models, and for a list of the many people who contributed to this project, please look at our web site, http://graphics.stanford.edu/projects/mich/.


Figure 3: A computer-generated rendering of a triangle mesh produced by sweeping our laser stripe once across the face of St. Matthew. A cross-sectional plot through the cheek is shown below the image. At this scale, Michelangelo's chisel marks can be seen.

Figure 4: A rendering of our full-resolution, merged model of St. Matthew. The original dataset contained 104 scans and 800,000,000 polygons. The model shown here contains 386,488,573 polygons.

Figure 5: A rendering with coloring derived from photographs of the statue as described in the text, but the lighting is artificial, having been chosen in the computer.

Figure 6: A non-photorealistic, accessibility-shaded coloring of the same mesh. To us, it seems to show the structure of Michelangelo's chisel marks more clearly than figure 3.

Figure 7: Computer renderings from a 2.0 mm, 8-million polygon model of David. The veining and reflectance are artificial. The renderings include physically correct subsurface scattering, but with arbitrary parameters.

Figure 8: The image at left depicts the classic three-quarter frontal view. The image at right shows the head in strict profile, a view never seen in art books, because it would require placing the camera with a telephoto lens outside the museum walls. Doesn't he look like a Roman coin?

Figure 9: The image at left depicts the David as it might be seen in the Galleria dell'Accademia - lit from above. However, if we artificially light the statue from below, you can easily see that the famous furrowed brow is actually an anatomically impossible protrusion of marble.


© 2003 Marc Levoy