
Posterior Odds Testing for a Unit Root with Data-Based Model Selection

Published online by Cambridge University Press:  11 February 2009

Peter C.B. Phillips
Affiliation:
Yale University
Werner Ploberger
Affiliation:
Technical University of Vienna

Abstract

The Kalman filter is used to derive updating equations for the Bayesian data density in discrete time linear regression models with stochastic regressors. The implied “Bayes model” has time-varying parameters and conditionally heterogeneous error variances. A σ-finite Bayes model measure is given and used to produce a new model selection criterion (PIC) and objective posterior odds tests for sharp null hypotheses like the presence of a unit root. This extends earlier work by Phillips and Ploberger [18]. Autoregressive-moving average (ARMA) models are considered, and a general test of trend stationarity versus difference stationarity is developed in ARMA models that allows for automatic order selection of the stochastic regressors and the degree of the deterministic trend. The tests are completely consistent in that both type I and type II errors tend to zero as the sample size tends to infinity. Simulation results and an empirical application are reported. The simulations show that the PIC works very well and is generally superior to the Schwarz BIC criterion, even in stationary systems. Empirical application of our methods to the Nelson-Plosser [11] series shows that three series (unemployment, industrial production, and the money stock) are level- or trend-stationary. The other eleven series are found to be stochastically nonstationary.
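The PIC construction in the paper is not reproduced here, but the Schwarz BIC against which the abstract benchmarks it has a simple form for autoregressions: minimize n·log(σ̂²_k) + k·log(n) over candidate lag orders k. The sketch below is an illustrative implementation of that baseline criterion only (the simulated AR(2) process, its coefficients, and all function names are our own assumptions, not from the paper); a common effective sample is used so the criteria are comparable across orders.

```python
import numpy as np

def ar_bic(y, k, kmax):
    """Schwarz (1978) BIC for an AR(k) fitted by OLS.

    All models are fit on the common effective sample y[kmax:] so that
    the criterion values are comparable across candidate orders k.
    """
    Y = y[kmax:]
    n = len(Y)
    if k == 0:
        resid = Y
    else:
        # Column j holds the lag-j regressor y[t-j] for t = kmax, ..., T-1.
        X = np.column_stack([y[kmax - j: len(y) - j] for j in range(1, k + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
    sigma2 = resid @ resid / n
    return n * np.log(sigma2) + k * np.log(n)

# Simulate a stationary AR(2) and select the lag order by minimizing BIC.
rng = np.random.default_rng(0)
T, kmax = 400, 6
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()
order = min(range(kmax + 1), key=lambda k: ar_bic(y, k, kmax))
```

The abstract's claim is that PIC, built from the Kalman-filter recursion for the Bayesian data density, generally dominates this criterion even in stationary systems such as the one simulated above.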

Type
Articles
Copyright
Copyright © Cambridge University Press 1994

References


1. Akaike, H. Fitting autoregressive models for prediction. Annals of the Institute of Statistical Mathematics 21 (1969): 243–247.
2. Akaike, H. On entropy maximization principle. In Krishnaiah, P.R. (ed.), Applications of Statistics, pp. 27–41. Amsterdam: North-Holland, 1977.
3. Brown, R.L., Durbin, J. & Evans, J.M. Techniques for testing the constancy of regression relationships over time. Journal of the Royal Statistical Society, Series B 37 (1975): 149–192.
4. Hannan, E.J. The estimation of the order of an ARMA process. Annals of Statistics 8 (1980): 1071–1081.
5. Hannan, E.J. Estimating the dimension of a linear system. Journal of Multivariate Analysis 11 (1981): 459–473.
6. Hannan, E.J. & Deistler, M. The Statistical Theory of Linear Systems. New York: Wiley, 1988.
7. Hannan, E.J. & Rissanen, J. Recursive estimation of ARMA order. Biometrika 69 (1982): 273–280 [Corrigenda: Biometrika 70 (1983)].
8. Harvey, A.C. Forecasting, Structural Time Series Models, and the Kalman Filter. Cambridge, UK: Cambridge University Press, 1989.
9. Kavalieris, L. A note on estimating autoregressive-moving average order. Biometrika 78 (1991): 920–922.
10. Lindley, D.V. A statistical paradox. Biometrika 44 (1957): 187–192.
11. Nelson, C.R. & Plosser, C. Trends and random walks in macroeconomic time series: Some evidence and implications. Journal of Monetary Economics 10 (1982): 139–162.
12. Ouliaris, S. & Phillips, P.C.B. COINT: Gauss Procedures for Cointegrated Regressions. Predicta Software, Inc., 133 Concord Drive, Madison, CT 06443; tel/fax 203-421-3784.
13. Park, J.Y. & Phillips, P.C.B. Statistical inference in regressions with integrated processes: Part 1. Econometric Theory 4 (1988): 468–497.
14. Park, J.Y. & Phillips, P.C.B. Statistical inference in regressions with integrated processes: Part 2. Econometric Theory 5 (1989): 95–131.
15. Paulsen, J. Order determination of multivariate autoregressive time series with unit roots. Journal of Time Series Analysis 5 (1984): 115–127.
16. Phillips, P.C.B. To criticize the critics: An objective Bayesian analysis of stochastic trends. Journal of Applied Econometrics 6 (1991): 333–364.
17. Phillips, P.C.B. Bayesian Model Selection and Prediction with Empirical Illustrations. Cowles Foundation discussion paper 1023, 1992 (to appear in Journal of Econometrics).
18. Phillips, P.C.B. & Ploberger, W. Time Series Modeling with a Bayesian Frame of Reference: Concepts, Illustrations and Asymptotics. Cowles Foundation discussion paper 980, 1992.
19. Phillips, P.C.B. & Ploberger, W. Time Series Modeling with a Bayesian Frame of Reference: A General Theory. In preparation.
20. Pötscher, B.M. Model selection under nonstationarity: Autoregressive models and stochastic linear regression models. Annals of Statistics 17 (1989): 1257–1274.
21. Rissanen, J. Modeling by shortest data description. Automatica 14 (1978): 465–471.
22. Schwarz, G. Estimating the dimension of a model. Annals of Statistics 6 (1978): 461–464.
23. Tsay, R.S. Order selection in nonstationary autoregressive models. Annals of Statistics 12 (1984): 1425–1433.
24. Wei, C.Z. On predictive least squares principles. Annals of Statistics 20 (1992): 1–42.
25. Zellner, A. An Introduction to Bayesian Inference in Econometrics. New York: Wiley, 1971.