In this paper, we present a neural-network learning scheme for face reconstruction. This scheme, which we call the smooth projected polygon representation neural network (SPPRNN), successively refines the vertex parameters of an initial 3D polygonal shape based on depth maps of several calibrated images taken from multiple views. The depth maps, obtained with the Tsai-Shah shape-from-shading (SFS) algorithm, can be regarded as partial 3D shapes of the face to be reconstructed. The reconstruction is finalized by mapping the texture of the face images onto the initial 3D shape. Three issues concerning the effectiveness of this scheme are investigated in this paper. First, how effectively SFS provides partial 3D shapes compared with using the 2D images directly. Second, how essential a smooth projected polygonal model is for approximating the face structure and improving the convergence rate of the scheme. Third, how an appropriate initial 3D shape should be selected and used to improve model resolution and learning stability. By carefully addressing these three issues, our experiments show that a compact and realistic 3D model of a human (mannequin) face can be obtained.
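The depth-map stage above relies on the Tsai-Shah linear-approximation SFS algorithm, which linearizes the Lambertian reflectance function and recovers depth by Newton-style updates. A minimal NumPy sketch of this generic update is given below; it is an illustration, not the authors' implementation, and the input image `E`, the illumination gradients `(ps, qs)`, the iteration count, and the numerical safeguards are all assumptions:

```python
import numpy as np

def tsai_shah_sfs(E, light=(0.5, 0.5), n_iter=100):
    """Sketch of Tsai-Shah linear shape-from-shading.

    E     : 2D array of image intensities normalized to [0, 1].
    light : (ps, qs) illumination-direction gradients (assumed values).
    Returns an estimated depth map Z of the same shape as E.
    """
    Z = np.zeros_like(E, dtype=float)
    ps, qs = light
    norm_s = np.sqrt(1.0 + ps**2 + qs**2)
    for _ in range(n_iter):
        # Discrete surface gradients: p = Z(x,y) - Z(x-1,y), q = Z(x,y) - Z(x,y-1)
        # (np.roll wraps at the borders; a real implementation would pad instead).
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        pq = 1.0 + p**2 + q**2
        # Lambertian reflectance, clamped to non-negative values.
        R = np.maximum(0.0, (1.0 + p * ps + q * qs) / (np.sqrt(pq) * norm_s))
        f = E - R  # brightness error to drive to zero
        # df/dZ = -(dR/dp + dR/dq), since dp/dZ = dq/dZ = 1.
        df_dZ = -((ps + qs) / (np.sqrt(pq) * norm_s)
                  - (p + q) * (1.0 + p * ps + q * qs) / (pq**1.5 * norm_s))
        # Guard against tiny derivatives before the Newton step.
        safe = np.where(np.abs(df_dZ) < 1e-6, 1e-6, df_dZ)
        Z = Z - f / safe
    return Z
```

In the paper's pipeline, one such depth map per calibrated view would then serve as a partial 3D shape against which the SPPRNN refines the polygon vertices.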