Generating Images of Face Poses for Pose Varying Face Recognition
Shahnas S1, Sreeletha S H2

1Shahnas S, Student, Department of Computer Science, LBSITW, Trivandrum, Kerala, India.
2Sreeletha S H, Assistant Professor, Department of Computer Science, LBSITW, Trivandrum, Kerala, India.

Manuscript received on May 25, 2020. | Revised Manuscript received on June 29, 2020. | Manuscript published on July 30, 2020. | PP: 351-356 | Volume-9 Issue-2, July 2020. | Retrieval Number: F9998038620/2020©BEIESP | DOI: 10.35940/ijrte.F9998.079220
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Deep learning has attracted several researchers in the field of computer vision due to its ability to perform face and object recognition tasks with higher accuracy than traditional shallow learning systems. The convolutional layers present in deep learning systems help capture the distinctive features of the face. For biometric authentication, face recognition (FR) has been preferred due to its passive nature. Processing face images is accompanied by a series of complexities, such as variations in pose, illumination, facial expression, and make-up. Although all of these aspects are important, the one that most impacts face-related computer vision applications is pose. In face recognition, a method capable of bringing faces to the same pose, usually a frontal view, has long been desired in order to ease recognition. Synthesizing different views of a face is still a great challenge, mostly because in non-frontal face images there is a loss of information when one side of the face occludes the other. Most FR solutions fail to perform well in cases involving extreme pose variations because, in such scenarios, the convolutional layers of the deep models are unable to find discriminative parts of the face from which to extract information. Most of the architectures proposed earlier deal with scenarios where the face images used for training and testing the deep learning models are frontal and near-frontal. In contrast, here a limited number of face images at different poses is used to train the model: separate generator models learn to map a single face image at any arbitrary pose to specific poses, and the discriminator performs the task of face recognition along with discriminating a synthetic face from a real-world sample. To this end, this paper proposes representation learning by rotating the face. Here, the encoder-decoder structure of the generator enables learning a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. This representation is explicitly disentangled from face variations such as pose through the pose code provided to the decoder and the pose estimation performed in the discriminator.
Keywords: Pose Variation, Face Recognition, Generative Adversarial Network, Adversarial Loss.
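To make the described architecture concrete, the following is a minimal PyTorch sketch of an encoder-decoder generator conditioned on a pose code and a multi-task discriminator with identity, pose, and real/fake heads. All module names, layer sizes, and dimensions (number of identities N_ID, pose bins N_POSE, noise size Z_DIM, feature size F_DIM) are illustrative assumptions and not the authors' implementation.

```python
# Hedged sketch of the pose-conditioned encoder-decoder generator and the
# multi-task discriminator outlined in the abstract. Sizes are assumptions.
import torch
import torch.nn as nn

N_ID, N_POSE, Z_DIM, F_DIM = 100, 9, 50, 256  # assumed dimensions

class Encoder(nn.Module):
    """Maps a 3x96x96 face image to an identity representation f(x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),       # 96 -> 48
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),     # 48 -> 24
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),    # 24 -> 12
            nn.Conv2d(256, F_DIM, 4, 2, 1), nn.LeakyReLU(0.2),  # 12 -> 6
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),               # -> F_DIM
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Synthesizes a face at the target pose from [f(x), pose code, noise]."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(F_DIM + N_POSE + Z_DIM, 256 * 6 * 6)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 6 -> 12
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 12 -> 24
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 24 -> 48
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),     # 48 -> 96
        )

    def forward(self, f, pose_code, z):
        h = self.fc(torch.cat([f, pose_code, z], dim=1)).view(-1, 256, 6, 6)
        return self.net(h)

class Discriminator(nn.Module):
    """Shared trunk with three heads: identity, pose, and real/fake."""
    def __init__(self):
        super().__init__()
        self.trunk = Encoder()                     # same backbone shape, separate weights
        self.id_head = nn.Linear(F_DIM, N_ID)      # face recognition
        self.pose_head = nn.Linear(F_DIM, N_POSE)  # pose estimation
        self.gan_head = nn.Linear(F_DIM, 1)        # real vs. synthetic

    def forward(self, x):
        h = self.trunk(x)
        return self.id_head(h), self.pose_head(h), self.gan_head(h)

# One simplified forward pass: the generator rotates x to a target pose; during
# training it would be rewarded when D recognizes the identity, predicts the
# target pose, and judges the synthesized image as real (adversarial loss).
enc, dec, D = Encoder(), Decoder(), Discriminator()
x = torch.randn(4, 3, 96, 96)                        # batch of face images
target_pose = torch.eye(N_POSE)[torch.randint(0, N_POSE, (4,))]
z = torch.randn(4, Z_DIM)
x_rot = dec(enc(x), target_pose, z)                  # face synthesized at target pose
id_logits, pose_logits, rf_logit = D(x_rot)
print(x_rot.shape, id_logits.shape, pose_logits.shape, rf_logit.shape)
```

Because the pose code and pose-estimation head absorb pose information, the encoder output f(x) is pushed to carry identity information only, which is what makes the learned representation usable for pose-invariant face recognition as described in the abstract.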