Testing statistical hypotheses based on the density power divergence

Annals of the Institute of Statistical Mathematics

Abstract

The family of density power divergences is a useful class that generates robust parameter estimates with high efficiency. None of these divergences requires a non-parametric density estimate to carry out the inference procedure. However, these divergences have so far not been used effectively in robust testing of hypotheses. In this paper, we develop tests of hypotheses based on this family of divergences. The asymptotic variances of the corresponding estimators are generally different from the inverse of the Fisher information matrix, so the usual drop-in-divergence type statistics do not lead to standard chi-square limits. We show that the alternative test statistics proposed here have asymptotic limits described by linear combinations of chi-square variables. Extensive simulation results are presented to substantiate the theory developed.
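
For orientation, a commonly used form of the density power divergence between the true density g and the model density f_θ, indexed by a tuning parameter α > 0, is written as follows (the symbols g, f_θ and α follow standard usage for this family and are not defined in the abstract itself):

$$
d_\alpha(g, f_\theta) = \int \Big\{ f_\theta^{\,1+\alpha}(x) - \Big(1 + \tfrac{1}{\alpha}\Big)\, g(x)\, f_\theta^{\,\alpha}(x) + \tfrac{1}{\alpha}\, g^{1+\alpha}(x) \Big\}\, dx ,
$$

with the Kullback–Leibler divergence recovered in the limit as α → 0; larger values of α buy robustness at some cost in efficiency.

Because the asymptotic null distributions are linear combinations of chi-square variables rather than a single chi-square, critical values are typically obtained numerically. The sketch below is a minimal Monte Carlo illustration of that step under assumed weights; the array `lam` is a hypothetical placeholder, not a set of values from the paper.

```python
import numpy as np

# Minimal sketch: simulate draws of sum_i lam[i] * Z_i**2 with Z_i i.i.d. N(0, 1),
# i.e. a linear combination of independent chi-square(1) variables, which is the
# form of asymptotic null distribution described in the abstract.
# The weights below are illustrative only; in practice they would be eigenvalues
# estimated from the model under the null hypothesis.
rng = np.random.default_rng(12345)
lam = np.array([1.0, 0.6, 0.3])               # hypothetical weights
z = rng.standard_normal((200_000, lam.size))  # standard normal draws
samples = (z ** 2) @ lam                      # one draw per row of the weighted sum
crit_95 = np.quantile(samples, 0.95)          # simulated 5%-level critical value
print(f"Approximate 95th percentile: {crit_95:.3f}")
```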

References

  • Basu A., Harris I. R., Hjort N. L., Jones M. C. (1998) Robust and efficient estimation by minimizing a density power divergence. Biometrika, 85: 549–559

    Article  MathSciNet  MATH  Google Scholar 

  • Basu, A., Shioya, H., Park, C. (2011). Statistical inference: the minimum distance approach. Boca Raton: Chapman Hall/CRC.

  • Csiszár I. (1991) Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. The Annal of Statistics 19: 2032–2066

    Article  MATH  Google Scholar 

  • De Angelis D., Young G. (1992) Smoothing the bootstrap. International Statistical Review 60: 45–56

    Article  MATH  Google Scholar 

  • Dik J. J., de GunstM. C. M. (1985) The distribution of general quadratic forms in normal variables. Statistica Neerlandica 39: 14–26

    Article  MathSciNet  Google Scholar 

  • Eckler A. R. (1969) A survey of coverage problems associated with point and area targets. Technometrics 11: 561–589

    Article  Google Scholar 

  • Fraser D. A. S. (1956) Sufficient statistics with nuisance parameters. The Annal of Mathematical Statistics 27: 838–842

    Article  Google Scholar 

  • Gupta S. S. (1963) Bibliography on the multivariate normal integrals and related topics. The Annal of Mathematical Statistics 34: 829–838

    Article  Google Scholar 

  • Harville D. A. (2008) Matrix Algebra from a statistician’s perspective. Springer, New York

    Google Scholar 

  • Heritier S., Ronchetti E. (1994) Robust Bounded-influence Tests in General Parametric Models. Journal of the American Statistical Association 89: 897–904

    Article  MathSciNet  MATH  Google Scholar 

  • Jensen D. R., Solomon H. (1972) A Gaussian approximation to the distribution of a definite quadratic form. Journal of the American Statistical Association 67(340): 898–902

    Google Scholar 

  • Johnson N. L., Kotz S. (1968) Tables of distributions of positive definite quadratic forms in central normal variables. Sankhya B—The Indian Journal of Statistics 30: 303–314

    Google Scholar 

  • Jones M. C., Hjort N. L., Harris I. R., Basu A. (2001) A comparison of related density-based minimum divergence methods. Biometrika, 88: 865–873

    Article  MathSciNet  Google Scholar 

  • Lindsay B. G. (1994) Efficiency versus robustness: the case for minimum Hellinger distance and related methods. The Annals of Statistics 22: 1081–1114

    Article  MathSciNet  Google Scholar 

  • Modarres R., Jernigan R. W. (1992) Testing the equality of correlation matrices. Communications in Statistics—Theory and Methods 21: 2107–2125

    Article  MathSciNet  MATH  Google Scholar 

  • Pardo L. (2006) Statistical inference based on divergence measures. Chapman Hall/CRC, Boca Raton

    MATH  Google Scholar 

  • Rao J. N. K., Scott A. J. (1981) The analysis of categorical data from complex sample surveys: Chi-squared tests for goodness of fit and independence in two-way tables. Journal of the American Statistical Association 76: 221–230

    Article  MathSciNet  Google Scholar 

  • Satterthwaite F. E. (1946) An approximate distribution of estimates of variance components. Biometrics 2: 110–114

    Article  Google Scholar 

  • Simpson D. G. (1989) Hellinger deviance tests: Efficiency, breakdown points and examples. Journal of the American Statistical Association 84: 107–113

    Article  MathSciNet  Google Scholar 

  • Solomon, H. (1960). Distribution of quadratic forms: tables and applications. Applied Mathematics and Statistics Laboratories, Technical Report 45, Stanford University, Stanford.

  • Welch W. J. (1987) Rerandomizing the median in matched-pair designs. Biometrika 74: 609–614

    Article  MathSciNet  Google Scholar 

  • Woodruff R. C., Mason J. M., Valencia R., Zimmering S. (1984) Chemical mutagenesis testing in Drosophila: I. Comparison of positive and negative control data for sex-linked recessive lethal mutations and reciprocal translocations in three laboratories. Environmental mutagenesis 6: 189–202

    Article  Google Scholar 

Download references

Author information

Correspondence to N. Martin.

About this article

Cite this article

Basu, A., Mandal, A., Martin, N. et al. Testing statistical hypotheses based on the density power divergence. Ann Inst Stat Math 65, 319–348 (2013). https://doi.org/10.1007/s10463-012-0372-y
