File spoon-archives/deleuze-guattari.archive/deleuze-guattari_2001/deleuze-guattari.0110, message 129


From: "venusia" <selonit-AT-moon.co.jp>
Date: Mon, 15 Oct 2001 20:49:46 +0200
Subject: ---- ---- ----/


Realistic Human Face Scanning and Rendering
someone sent me this, i don't know who. it's not funny.

Introduction

One of the "holy grails" of computer graphics is being able to create human faces that look and act realistically. There are many levels of attaining this goal. A digital photograph, for example, is a computer image of a human face that looks real. For faces that act realistically, you can't beat a video camera. These acquisition methods, however, are lacking for two reasons: they only record data in two dimensions, and they are not controllable.


If you make a video recording of someone standing still reciting the alphabet, you can't then play it back and make her sing "The Star-Spangled Banner" while spinning in circles. Less obvious, but just as important for applications such as computer-based special effects in movies, there is no way to change the lighting after a photograph has been taken or a scene has been filmed. This means, for example, that it is difficult to take a photograph of a person standing in the desert on a sunny day and make it look like they are standing indoors in a cathedral in front of a stained glass window. Doing so requires painstaking manual touchup by skilled artists, and it is still almost impossible to make it look real.

One of the focuses of our work in the ICT Graphics Lab is to be able to acquire 3D models of humans that can be animated, relit, edited, and otherwise modified. Our "killer app" would be a device in which a person sits and which, after a brief "scan," yields a computer graphics model of their face that we could stick onto a video game character's head, or composite into a movie frame for a special effects shot. These computer-generated images would be so realistic that they would be indistinguishable from photographs. This level of realism is called photorealism.

Our prototype device for digitizing a human face is a rig called a lightstage. Scanning a person in the lightstage is a two-phase process. In the first phase, which lasts a few seconds, the lightstage photographs a human subject's face lit from hundreds of different directions. This recovers the skin's reflectance properties (how skin reflects light, its color, shininess, etc.). The second phase uses a technique called active vision to recover a 3D geometric model of the face (its shape). These two components, reflectance properties and geometry, should be enough to reconstruct the appearance of a person under any lighting, from any point of view.
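Because light transport is linear, the photographs taken under the individual lightstage directions act as a basis: an image of the face under a new lighting environment is a weighted sum of those basis photographs. The sketch below shows that sum in NumPy; the array shapes and variable names are assumptions for illustration, not the lab's actual code.

import numpy as np

def relight(basis, weights):
    # basis:   (N, H, W, 3) array, one photograph per lightstage direction
    # weights: (N,) array, intensity of the new environment in each direction
    basis = np.asarray(basis, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    # By linearity of light transport, the relit image is the weighted sum
    # of the basis photographs over the N lighting directions.
    return np.tensordot(weights, basis, axes=(0, 0))

# Example: one dominant light plus a dim ambient fill (indices are made up).
# weights = np.full(basis.shape[0], 0.05)
# weights[42] = 1.0
# relit = relight(basis, weights)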


Geometric Properties
Active vision involves projecting patterns with a projector onto a subject. The face's appearance under each pattern is recorded by a separate video camera. By analyzing the images of the face with different patterns on it, active vision techniques obtain a 3D model of the subject. One of the problems with using active vision technology to recover high-resolution geometry is that the geometry tends to be very noisy. This noise is the geometric equivalent of getting bad reception on your TV. It introduces sharp random features like small ridges and valleys that are not present on the actual face. This is the case with the image shown below on the left (A). We would like techniques that allow us to recover higher quality geometric information, without all of this noise.

One approach is to use the reflectance properties we recover in order to improve the geometric model. Using the reflectance data recorded by the lightstage and a technique called photometric stereo, we are able to obtain an accurate estimate of the direction that the surface of the skin is oriented at each point on the face. This data is called a surface normal map or orientation map. We have developed a technique, called differential displacement map recovery, to incorporate this normal map with the noisy range scan and obtain very accurate geometric models of the face. It is efficient (typical processing time for a high-res scan is 1-2 minutes) and it gives good results. Examples are shown below in figures A-D. A forthcoming publication will fully describe this technique.
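Classic Lambertian photometric stereo illustrates how per-pixel intensities under known light directions yield a normal map: each pixel gives a small linear system whose least-squares solution is the albedo-scaled normal. This is a sketch under that Lambertian assumption; the lab's exact formulation may differ.

import numpy as np

def photometric_stereo(intensities, light_dirs):
    # intensities: (P, H, W) grayscale images, one per lighting direction
    # light_dirs:  (P, 3) unit vectors pointing toward each light source
    P, H, W = intensities.shape
    I = np.asarray(intensities, dtype=np.float64).reshape(P, -1)
    # Solve light_dirs @ G = I per pixel in the least-squares sense,
    # where G is the albedo-scaled surface normal.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)                   # unit normals
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

The resulting normal map is what the differential displacement map recovery step combines with the noisy range scan.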

