Paper
Image analysis for face modeling and facial image reconstruction
1 September 1990
Hiroshi Agawa, Gang Xu, Yoshio Nagashima, Fumio Kishino
Proceedings Volume 1360, Visual Communications and Image Processing '90: Fifth in a Series; (1990) https://doi.org/10.1117/12.24135
Event: Visual Communications and Image Processing '90, 1990, Lausanne, Switzerland
Abstract
We have studied a stereo-based approach to three-dimensional face modeling and the reconstruction of facial images as virtually viewed from different angles. This paper describes the system, in particular the image analysis and facial shape feature extraction techniques, which use information about the color and position of the face and its components together with image histogram and line segment analysis. Using these techniques, the system extracts facial features precisely and automatically, independent of facial image size and face tilt. In our system, input images viewed from the front and side of the face are processed as follows: the input images are first transformed into a set of color pictures with significant features. Regions are segmented by thresholding or slicing after analyzing the histograms of the pictures. Using knowledge about the color and position of the face, the face and hair regions are obtained and the facial boundaries extracted. Feature points along the obtained profile are extracted using the amplitude and sign of the curvature and knowledge about the distances between feature points. In the facial areas that contain facial components, regions are again segmented by the same techniques, using color information for each component. The component regions are recognized using knowledge of facial component positions. In each region, the pictures are filtered with differential operators selected according to the picture and region, and thinned images are obtained from the filtered images by various image processing and line segment analysis techniques. Feature points of the front and side views are then extracted. Finally, differences in size, position, and facial tilt between the two input images are compensated for by matching the common feature points in the two views; thus, the three-dimensional data of the feature points and the boundaries of the face are acquired. Two base face models, representing a typical Japanese man and woman, are prepared, and the model of the same sex as the subject is modified in a linear manner with 3D data from the extracted feature points and boundaries. Images virtually viewed from different angles are reconstructed by mapping the facial texture onto the modified model.
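The abstract does not specify the color transforms or the threshold-selection rule used in the histogram analysis, so the following is a minimal sketch, assuming a single-channel color picture and a valley-between-two-peaks thresholding rule; the function names (valley_threshold, segment_region) and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Sketch of histogram-based region segmentation (assumed valley-between-peaks rule).
import numpy as np

def valley_threshold(channel: np.ndarray, bins: int = 256) -> int:
    """Pick a threshold at the deepest valley between the two main histogram peaks."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, bins))
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # light smoothing
    peak1 = int(np.argmax(smooth))
    masked = smooth.copy()
    masked[max(0, peak1 - 20):min(bins, peak1 + 20)] = 0  # suppress first peak
    peak2 = int(np.argmax(masked))                         # second-largest peak
    left, right = sorted((peak1, peak2))
    return left + int(np.argmin(smooth[left:right + 1]))   # deepest point between peaks

def segment_region(channel: np.ndarray) -> np.ndarray:
    """Binary mask of pixels above the selected threshold (e.g. a face-like region)."""
    return channel > valley_threshold(channel)

if __name__ == "__main__":
    # Synthetic bimodal picture: dark background with a brighter face-like blob.
    rng = np.random.default_rng(0)
    img = rng.normal(60, 10, (128, 128))
    img[32:96, 32:96] = rng.normal(180, 12, (64, 64))
    mask = segment_region(np.clip(img, 0, 255))
    print("face-like pixels:", int(mask.sum()))
```

In the paper's pipeline, masks of this kind would feed the subsequent steps (boundary extraction, curvature-based feature points, and component-wise filtering); those steps are not reproduced here.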
© (1990) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hiroshi Agawa, Gang Xu, Yoshio Nagashima, and Fumio Kishino "Image analysis for face modeling and facial image reconstruction", Proc. SPIE 1360, Visual Communications and Image Processing '90: Fifth in a Series, (1 September 1990); https://doi.org/10.1117/12.24135
KEYWORDS
Image segmentation
Feature extraction
3D modeling
Data modeling
Image processing
3D image processing
Image analysis