Elsevier

Pattern Recognition

Volume 38, Issue 12, December 2005, Pages 2270-2285

Score normalization in multimodal biometric systems

https://doi.org/10.1016/j.patcog.2005.01.012

Abstract

Multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance compared to systems based on a single biometric modality. Although information fusion in a multimodal system can be performed at various levels, integration at the matching score level is the most common approach due to the ease in accessing and combining the scores generated by different matchers. Since the matching scores output by the various modalities are heterogeneous, score normalization is needed to transform these scores into a common domain, prior to combining them. In this paper, we have studied the performance of different normalization techniques and fusion rules in the context of a multimodal biometric system based on the face, fingerprint and hand-geometry traits of a user. Experiments conducted on a database of 100 users indicate that the application of min–max, z-score, and tanh normalization schemes followed by a simple sum of scores fusion method results in better recognition performance compared to other methods. However, experiments also reveal that the min–max and z-score normalization techniques are sensitive to outliers in the data, highlighting the need for a robust and efficient normalization procedure like the tanh normalization. It was also observed that multimodal systems utilizing user-specific weights perform better compared to systems that assign the same set of weights to the multiple biometric traits of all users.

Introduction

Biometric systems make use of the physiological and/or behavioral traits of individuals for recognition purposes [1]. These traits include fingerprints, hand-geometry, face, voice, iris, retina, gait, signature, palm-print, ear, etc. Biometric systems that use a single trait for recognition (i.e., unimodal biometric systems) are often affected by several practical problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks [2]. Multimodal biometric systems overcome some of these problems by consolidating the evidence obtained from different sources [3]. These sources may be multiple sensors for the same biometric (e.g., optical and solid-state fingerprint sensors), multiple instances of the same biometric (e.g., fingerprints from different fingers of a person), multiple snapshots of the same biometric (e.g., four impressions of a user's right index finger), multiple representations and matching algorithms for the same biometric (e.g., multiple face matchers like PCA and LDA), or multiple biometric traits (e.g., face and fingerprint).

The use of multiple sensors addresses the problem of noisy sensor data, but all other potential problems associated with unimodal biometric systems remain. A recognition system that works on multiple instances of the same biometric can ensure the presence of a live user by asking the user to provide a random subset of biometric measurements (e.g., left index finger followed by right middle finger). Multiple snapshots of the same biometric, or multiple representations and matching algorithms for the same biometric may also be used to improve the recognition performance of the system. However, all these methods still suffer from many of the problems faced by unimodal systems. A multimodal biometric system based on different traits is expected to be more robust to noise, address the problem of non-universality, improve the matching accuracy, and provide reasonable protection against spoof attacks. Hence, the development of biometric systems based on multiple biometric traits has received considerable attention from researchers.

In a multimodal biometric system that uses different biometric traits, various levels of fusion are possible: fusion at the feature extraction level, matching score level or decision level (as explained in Section 2). It is difficult to consolidate information at the feature level because the feature sets used by different biometric modalities may either be inaccessible or incompatible. Fusion at the decision level is too rigid since only a limited amount of information is available at this level. Therefore, integration at the matching score level is generally preferred due to the ease in accessing and combining matching scores.

In the context of verification, fusion at the matching score level can be approached in two distinct ways. In the first approach the fusion is viewed as a classification problem, while in the second approach it is viewed as a combination problem. In the classification approach, a feature vector is constructed using the matching scores output by the individual matchers; this feature vector is then classified into one of two classes: “Accept” (genuine user) or “Reject” (impostor). In the combination approach, the individual matching scores are combined to generate a single scalar score which is then used to make the final decision. Both these approaches have been widely studied in the literature. Ross and Jain [4] have shown that the combination approach performs better than some classification methods like decision tree and linear discriminant analysis. However, no single classification or combination scheme works well under all circumstances. In this paper, we use the combination approach to fusion and address some of the issues involved in computing a single matching score given the scores of different modalities. Since the matching scores generated by the different modalities are heterogeneous, normalization is required to transform these scores into a common domain before combining them. While several normalization techniques have been proposed, there has been no detailed study of these techniques. In this work, we have systematically studied the effects of different normalization schemes on the performance of a multimodal biometric system based on the face, fingerprint and hand-geometry modalities.
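The combination approach described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the score values, weights, and the decision threshold below are arbitrary placeholders, and the scores are assumed to have already been normalized to a common domain.

```python
def fuse_sum(scores, weights=None):
    """Weighted sum-of-scores fusion: combine per-modality matching
    scores (already normalized to a common domain) into one scalar.
    With no weights given, all modalities are weighted equally."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical normalized scores for face, fingerprint, hand-geometry.
fused = fuse_sum([0.82, 0.91, 0.64])
decision = "Accept" if fused >= 0.7 else "Reject"  # illustrative threshold
```

User-specific weighting, which the abstract reports as beneficial, amounts to passing a different `weights` vector per enrolled user instead of the uniform default.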

The rest of the paper is organized as follows: Section 2 presents a brief overview of the various approaches used for information fusion in multimodal biometrics and motivates the need for score normalization prior to matching score fusion. Section 3 describes different techniques that can be used to normalize the scores obtained from different matchers. Section 4 presents the experimental results, and Section 5 outlines our conclusions.


Fusion in multimodal biometrics

A biometric system has four important modules. The sensor module acquires the biometric data from a user; the feature extraction module processes the acquired biometric data and extracts a feature set to represent it; the matching module compares the extracted feature set with the stored templates using a classifier or matching algorithm in order to generate matching scores; in the decision module the matching scores are used either to identify an enrolled user or verify a user's identity.
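The verification flow through these four modules can be sketched as follows. This is a hypothetical outline only: `extract`, `match`, and `threshold` are placeholder callables and parameters, and the toy set-overlap matcher stands in for a real matching algorithm.

```python
def verify(sample, template, extract, match, threshold):
    """One verification attempt through the four-module pipeline:
    the sensor module has already produced `sample`; feature
    extraction, matching, and the decision step follow."""
    features = extract(sample)         # feature extraction module
    score = match(features, template)  # matching module -> matching score
    return score >= threshold          # decision module (verify identity)

# Toy stand-ins: features are the raw sample items; the matching score
# is the overlap ratio between sample features and the stored template.
toy_extract = lambda sample: set(sample)
toy_match = lambda feats, tmpl: len(feats & tmpl) / len(feats | tmpl)

accepted = verify({"a", "b", "c"}, {"a", "b", "c", "d"},
                  toy_extract, toy_match, threshold=0.5)
```

In identification mode the decision module would instead compare the probe against every enrolled template and report the best match, rather than thresholding a single score.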

Score normalization

Consider a multimodal biometric verification system that utilizes the combination approach to fusion at the match score level. The theoretical framework developed by Kittler et al. [18] can be applied to this system only if the output of each modality is of the form P(genuine|Z), i.e., the a posteriori probability of the user being “genuine” given the input biometric sample Z. In practice, most biometric systems output a matching score s, and Verlinde et al. [23] have proposed that the matching …
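The three normalization schemes that the experiments single out can be sketched as follows. This is a simplified sketch: in the paper the tanh scheme estimates its location and scale parameters with Hampel estimators of the genuine score distribution, whereas here ordinary mean and standard deviation are passed in for brevity.

```python
import math

def min_max(s, lo, hi):
    """Min-max normalization: maps s into [0, 1] using the smallest (lo)
    and largest (hi) training scores. Outlier-sensitive: one extreme
    score stretches the range and compresses all other scores."""
    return (s - lo) / (hi - lo)

def z_score(s, mu, sigma):
    """Z-score normalization: zero mean and unit variance. The mean mu
    and standard deviation sigma are likewise outlier-sensitive."""
    return (s - mu) / sigma

def tanh_norm(s, mu, sigma):
    """Tanh normalization: maps scores into (0, 1) and is robust to
    outliers because extreme scores saturate the tanh."""
    return 0.5 * (math.tanh(0.01 * (s - mu) / sigma) + 1.0)
```

Note that min-max and z-score are linear in s, so an outlier shifts every normalized score; tanh's saturation is what buys the robustness the experiments observe.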

Experimental results

The multimodal database used in our experiments was constructed by merging two separate databases (of 50 users each) collected using different sensors and over different time periods. The first database (described in [4]) was constructed as follows: Five face images and five fingerprint impressions (of the same finger) were obtained from a set of 50 users. Face images were acquired using a Panasonic CCD camera (640×480) and fingerprint impressions were obtained using a Digital Biometrics sensor …

Conclusion and future work

This paper examines the effect of different score normalization techniques on the performance of a multimodal biometric system. We have demonstrated that normalizing the scores prior to combining them improves the recognition performance of a multimodal biometric system that uses the face, fingerprint and hand-geometry traits for user authentication. Min–max, z-score, and tanh normalization techniques followed by a simple sum of scores fusion method result in a higher genuine acceptance rate (GAR) than all the …


References (34)

  • A. Ross et al.

    Information fusion in biometrics

    Pattern Recogn. Lett.

    (2003)
  • L. Lam et al.

    Optimal combination of pattern classifiers

    Pattern Recogn. Lett.

    (1995)
  • S. Prabhakar et al.

    Decision-level fusion in fingerprint verification

    Pattern Recogn.

    (2002)
  • A.K. Jain et al.

    An introduction to biometric recognition

    IEEE Trans. Circuits Systems Video Technol.

    (2004)
  • A.K. Jain et al.

    Multibiometric systems

    Commun. ACM

    (2004)
  • L. Hong et al.

    Can multibiometrics improve performance?

  • C. Sanderson, K.K. Paliwal, Information fusion and person verification using speech and face information, Research...
  • S.S. Iyengar et al.

    Advances in Distributed Sensor Technology

    (1995)
  • R.O. Duda et al.

    Pattern Classification

    (2001)
  • K. Woods et al.

    Combination of multiple classifiers using local accuracy estimates

    IEEE Trans. Pattern Anal. Mach. Intell.

    (1997)
  • K. Chen et al.

    Methods of combining multiple classifiers with different features and their applications to text-independent speaker identification

    Int. J. Pattern Recogn. Artif. Intell.

    (1997)
  • L. Lam et al.

    Application of majority voting to pattern recognition: an analysis of its behavior and performance

    IEEE Trans. Systems Man Cybernet. Part A: Systems Humans

    (1997)
  • L. Xu et al.

    Methods for combining multiple classifiers and their applications to handwriting recognition

    IEEE Trans. Systems Man Cybernet.

    (1992)
  • J. Daugman, Combining Multiple Biometrics, Available at...
  • T.K. Ho et al.

    Decision combination in multiple classifier systems

    IEEE Trans. Pattern Anal. Mach. Intell.

    (1994)
  • Y. Wang et al.

    Combining face and iris biometrics for identity verification

  • P. Verlinde et al.

    Comparing decision fusion paradigms using k-NN based classifiers, decision trees and logistic regression in a multi-modal identity verification application


    About the Author—ANIL JAIN is a University Distinguished Professor in the Departments of Computer Science and Engineering and Electrical and Computer Engineering at Michigan State University. He was the Department Chair between 1995–99. His research interests include statistical pattern recognition, exploratory pattern analysis, texture analysis, document image analysis and biometric authentication. Several of his papers have been reprinted in edited volumes on image processing and pattern recognition. He received the best paper awards in 1987 and 1991, and received certificates for outstanding contributions in 1976, 1979, 1992, 1997 and 1998 from the Pattern Recognition Society. He also received the 1996 IEEE Transactions on Neural Networks Outstanding Paper Award. He was the Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence between 1991–1994. He is a fellow of the IEEE, ACM, and International Association of Pattern Recognition (IAPR). He has received a Fulbright Research Award, a Guggenheim fellowship and the Alexander von Humboldt Research Award. He delivered the 2002 Pierre Devijver lecture sponsored by the International Association of Pattern Recognition (IAPR). He holds six patents in the area of fingerprint matching.

    About the Author—KARTHIK NANDAKUMAR received his B.E. degree in Electronics and Communication Engineering from Anna University, Chennai, India in 2002. He is now a Ph.D. student in the Department of Computer Science and Engineering, Michigan State University. His research interests include statistical pattern recognition, biometric authentication, computer vision and machine learning.

    About the Author—ARUN ROSS is an Assistant Professor in the Lane Department of Computer Science and Electrical Engineering at West Virginia University. Ross received his B.E. (Hons.) degree in Computer Science from the Birla Institute of Technology and Science, Pilani (India), in 1996. He obtained his M.S. and Ph.D. degrees in Computer Science and Engineering from Michigan State University in 1999 and 2003, respectively. Between July 1996 and December 1997, he worked with the Design and Development group of Tata Elxsi (India) Ltd., Bangalore. He also spent three summers (2000–2002) with the Imaging and Visualization group at Siemens Corporate Research, Inc., Princeton, working on fingerprint recognition algorithms. His research interests include statistical pattern recognition, image processing, computer vision and biometrics.

    This research was supported by the Center for Identification Technology Research (CITeR), a NSF/IUCRC program.
