Abstract
In the previous chapter, we learned about descriptive statistics, such as means and standard deviations, and the insights that can be gained from such measures. Often, we use these measures to compare groups. For example, we might be interested in investigating whether men or women spend more money on the Internet. Assume that the mean amount a sample of men spends online is 200 USD per year, compared with a mean of 250 USD for a sample of women. Two means drawn from different samples are almost always different in a mathematical sense, but are these differences also statistically significant?
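The question posed above is exactly what an independent samples t-test answers. The chapter itself works with SPSS, but as an illustrative sketch the comparison can be run in Python with scipy; the data below are hypothetical draws with means of roughly 200 and 250 USD:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical yearly online spending in USD (population means of 200 vs. 250)
men = rng.normal(loc=200, scale=80, size=50)
women = rng.normal(loc=250, scale=80, size=50)

# Independent samples t-test: are the two sample means significantly different?
t_stat, p_value = stats.ttest_ind(men, women)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A p-value below the chosen significance level (typically 0.05) would lead us to reject the null hypothesis that men and women spend the same amount on average.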
Learning Objectives
After reading this chapter you should understand:
– The logic of hypothesis testing.
– The steps involved in hypothesis testing.
– What test statistics are.
– Types of error in hypothesis testing.
– Common types of t-tests, one-way and two-way ANOVA.
– How to interpret SPSS outputs.
Notes
1. Note that the power of a statistical test depends on a number of factors, some of which are particular to a specific testing situation. However, power nearly always depends on (1) the chosen significance level, and (2) the magnitude of the effect of interest in the population.
2. To obtain the critical value, you can also use the TINV function provided in Microsoft Excel, whose general form is “TINV(α, df).” Here, α represents the desired Type I error rate and df the degrees of freedom. To carry out this computation, open a new Excel spreadsheet and type in “=TINV(2*0.05,9).” Note that we have to specify “2*0.05” (or directly 0.1) under α, because TINV assumes a two-tailed test and we are applying a one-tailed test.
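If Excel is not at hand, the same one-tailed critical value can be obtained from the t-distribution in Python's scipy; this is a sketch outside the chapter's SPSS/Excel toolset:

```python
from scipy import stats

alpha = 0.05  # desired Type I error rate (one-tailed)
df = 9        # degrees of freedom

# Equivalent of Excel's TINV(2*0.05, 9): the one-tailed critical t-value
critical_value = stats.t.ppf(1 - alpha, df)
print(round(critical_value, 3))  # 1.833
```

The result matches the familiar t-table value of 1.833 for df = 9 at the 5% level (one-tailed).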
3. Unfortunately, there is considerable confusion about the difference between α and the p-value. See Hubbard and Bayarri (2003) for a discussion.
4. We don’t have to conduct manual calculations or consult statistical tables when working with SPSS. However, we can easily compute the p-value ourselves using the TDIST function in Microsoft Excel. The function has the general form “TDIST(t, df, tails)”, where t describes the test value, df the degrees of freedom, and tails specifies whether it is a one-tailed test (tails = 1) or a two-tailed test (tails = 2). For our example, just open a new spreadsheet and type in “=TDIST(2.274,9,1)”. Likewise, there are several webpages with Java-based modules (e.g., http://www.graphpad.com/quickcalcs/index.cfm) that calculate p-values and test statistic values.
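Analogously, the p-value from note 4 can be reproduced with scipy's t-distribution instead of Excel's TDIST; shown here as an illustrative alternative:

```python
from scipy import stats

t_value = 2.274  # test statistic from the example
df = 9           # degrees of freedom

# Equivalent of Excel's TDIST(2.274, 9, 1): upper-tail (one-tailed) probability
p_one_tailed = stats.t.sf(t_value, df)
p_two_tailed = 2 * p_one_tailed
print(f"one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
```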
5. There may be a situation in which we know the population standard deviation beforehand, for example, from a previous study. From a strict statistical viewpoint, it would be appropriate to use a z-test in this case, but both tests yield results that differ only marginally.
6. The number of pairwise comparisons is calculated as follows: k·(k − 1)/2, where k is the number of groups to compare.
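The formula in note 6 can be checked with a small helper function (hypothetical, for illustration only):

```python
def n_pairwise_comparisons(k: int) -> int:
    """Number of pairwise comparisons among k groups: k*(k-1)/2."""
    return k * (k - 1) // 2

print(n_pairwise_comparisons(3))  # 3 comparisons for three groups
print(n_pairwise_comparisons(5))  # 10 comparisons for five groups
```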
7. Field (2009) provides a detailed introduction to further ANOVA types such as multivariate ANOVA (MANOVA) and analysis of covariance (ANCOVA).
8. Note that you can also apply ANOVA when comparing two groups. However, in this case, you should instead use the two independent samples t-test.
9. Nonparametric alternatives to ANOVA are, for example, the χ²-test of independence (for nominal variables) and the Kruskal–Wallis test (for ordinal variables). See, for example, Field (2009).
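As a sketch of the Kruskal–Wallis alternative mentioned in note 9, scipy provides the test directly; the ratings below are made up for illustration:

```python
from scipy import stats

# Hypothetical ordinal ratings (e.g., satisfaction on a 5-point scale)
group_1 = [3, 4, 2, 5, 4]
group_2 = [2, 1, 3, 2, 2]
group_3 = [5, 4, 5, 3, 4]

# Kruskal-Wallis H-test: do the three groups differ in their distributions?
h_stat, p_value = stats.kruskal(group_1, group_2, group_3)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")
```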
10. In fact, these two assumptions are interrelated, since unequal group sample sizes result in a greater probability that we will violate the homogeneity assumption.
11. SS is an abbreviation of “sum of squares” because the variation is calculated by means of squared differences between different types of values.
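The decomposition behind note 11, in which the total variation splits into between-group and within-group sums of squares, can be verified numerically with made-up data:

```python
import numpy as np

# Three hypothetical groups of observations
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 3.0, 4.0]),
          np.array([4.0, 5.0, 6.0])]

all_values = np.concatenate(groups)
grand_mean = all_values.mean()

# Total variation: squared deviations of every observation from the grand mean
ss_total = ((all_values - grand_mean) ** 2).sum()
# Between-group variation: squared deviations of group means from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group variation: squared deviations of observations from their group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The identity SS_total = SS_between + SS_within holds exactly
print(round(ss_total, 6), round(ss_between + ss_within, 6))  # 20.0 20.0
```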
12. Note that the application of post hoc tests only makes sense when the overall F-test finds a significant effect.
13. Note that the data are artificial.
14. Contrary to this, the Welch test suggests that there are no differences between the three groups (p-value 0.081 > 0.05). This divergent result underlines the importance of carefully considering the result of Levene’s test.
15. Note that the data are artificial.
References
Boneau CA (1960) The effects of violations of assumptions underlying the t test. Psychol Bull 57(1):49–64
Brown MB, Forsythe AB (1974) Robust tests for the equality of variances. J Am Stat Assoc 69(346):364–367
Cohen J (1992) A power primer. Psychol Bull 112(1):155–159
Field A (2009) Discovering statistics using SPSS, 3rd edn. Sage, London
Hubbard R, Bayarri MJ (2003) Confusion over measure of evidence (p’s) versus errors (α’s) in classical statistical testing. Am Stat 57(3):171–178
Lilliefors HW (1967) On the Kolmogorov–Smirnov test for normality with mean and variance unknown. J Am Stat Assoc 62(318):399–402
Schwaiger M, Sarstedt M, Taylor CR (2010) Art for the sake of the corporation: Audi, BMW Group, DaimlerChrysler, Montblanc, Siemens, and Volkswagen help explore the effect of sponsorship on corporate reputations. J Advert Res 50(1):77–90
Welch BL (1951) On the comparison of several mean values: an alternative approach. Biometrika 38(3/4):330–336
Copyright information
© 2010 Springer Berlin Heidelberg
Cite this chapter
Mooi, E., Sarstedt, M. (2010). Hypothesis Testing & ANOVA. In: A Concise Guide to Market Research. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12541-6_6
Print ISBN: 978-3-642-12540-9
Online ISBN: 978-3-642-12541-6