Long-run forecasts of economic variables play an important role in policy, planning, and portfolio decisions. We consider long-horizon forecasts of average growth of a scalar variable, assuming that first differences are second-order stationary. The main contribution is the construction of predictive sets with asymptotic coverage over a wide range of data generating processes, allowing for stochastically trending mean growth, slow mean reversion and other types of long-run dependencies. We illustrate the method by computing predictive sets for 10 to 75 year average growth rates of U.S. real per-capita GDP, consumption, productivity, price level, stock prices and population.

HAC Corrections for Strongly Autocorrelated Time Series.
Applied work routinely relies on heteroskedasticity and autocorrelation consistent (HAC) standard errors when conducting inference in a time series setting. As is well known, however, these corrections perform poorly in small samples under pronounced autocorrelations. In this paper, I first provide a review of popular methods to clarify the reasons for this failure. I then derive inference that remains valid under a specific form of strong dependence. In particular, I assume that the long-run properties can be approximated by a stationary Gaussian AR(1) model, with coefficient arbitrarily close to one. In this setting, I derive tests that come close to maximizing a weighted average power criterion. Small sample simulations show that these tests perform well, including in a regression context.

Credibility of Confidence Sets in Nonstandard Econometric Problems. (Joint with ANDRIY NORETS.)
Frequentist confidence intervals are the most common description of parameter uncertainty in econometrics. However, in nonstandard problems, they are not guaranteed to have reasonable properties in that regard. For instance, confidence sets may be empty with positive probability, even if they are based on inverting powerful tests, or if they are chosen to minimize (averaged) expected length. We apply a betting framework to formalize the "reasonableness" of confidence intervals as descriptions of parameter uncertainty, and use it for two different purposes. On the one hand, we quantify the reasonableness of a given confidence interval in a nonstandard problem. On the other hand, we construct confidence sets that satisfy betting and other attractive criteria. We apply our framework to inference about a parameter near a boundary; the value of the autoregressive root close to unity; the magnitude and date of a break in a time-series model; regression with a weak instrument; and two moment inequality models. We find most previously suggested confidence intervals to be far from reasonable, and numerically determine alternative confidence sets that satisfy our criteria.

Nearly Optimal Tests when a Nuisance Parameter is Present Under the Null Hypothesis. (Joint with GRAHAM ELLIOTT and MARK WATSON.)
This paper considers nonstandard hypothesis testing problems that involve a nuisance parameter. We establish a bound on the weighted average power of all valid tests, and develop a numerical algorithm that determines a feasible test with power close to the bound. The approach is illustrated in six applications: inference about a linear regression coefficient when the sign of a control coefficient is known; small sample inference about the difference in means from two independent Gaussian samples from populations with potentially different variances; inference about the break date in structural break models with moderate break magnitude; predictability tests when the regressor is highly persistent; inference about an interval identified parameter; and inference about a linear regression coefficient when the necessity of a control is in doubt.

Forecasts in a Slightly Misspecified Finite Order VAR. (Joint with JAMES STOCK.)
We propose a Bayesian procedure for exploiting small, possibly long-lag linear predictability in the innovations of a finite order autoregression. We model the innovations as having a log-spectral density that is a continuous mean-zero Gaussian process of order 1/sqrt(T). This local embedding makes the problem asymptotically a normal-normal Bayes problem, resulting in closed-form solutions for the best forecast. When applied to data on 132 U.S. monthly macroeconomic time series, the method is found to improve upon autoregressive forecasts by an amount consistent with the theoretical and Monte Carlo calculations.

Pre and Post Break Parameter Inference. (Joint with GRAHAM ELLIOTT.)
This paper provides a method for conducting inference about the pre- and post-break value of a scalar parameter in GMM time series models with a single break at an unknown date. We show that treating the break date estimated by least squares as the true break date leads to substantially oversized tests and confidence intervals unless the break is large. We develop an alternative test that controls size uniformly and that is approximately efficient in a well-defined sense.

Forthcoming and Published Papers
Risk of Bayesian Inference in Misspecified Models, and the Sandwich Covariance Matrix. Accepted for publication in Econometrica.
Low-Frequency Robust Cointegration Testing, Journal of Econometrics 174 (2013), 66 – 81. (Joint with MARK WATSON.)
Measuring Prior Sensitivity and Prior Informativeness in Large Bayesian Models, Journal of Monetary Economics 59 (2012), 581 – 597.
Efficient Tests under a Weak Convergence Assumption, Econometrica 79 (2011), 395 – 435. (Formerly circulated under the title "An Alternative Sense of Asymptotic Efficiency".)
Estimation of the Parameter Path in Unstable Time Series Models, Review of Economic Studies 77 (2010), 1508 – 1539. Supplement. Correction. (Joint with PHILIPPE-EMMANUEL PETALAS.)
t-statistic Based Correlation and Heterogeneity Robust Inference, Journal of Business & Economic Statistics 28 (2010), 453 – 468. Supplement. (Joint with RUSTAM IBRAGIMOV.)
Valid Inference in Partially Unstable GMM Models, Review of Economic Studies 76 (2009), 343 – 365. (Joint with HONG LI.)
Comment on "Unit Root Testing in Practice: Dealing with Uncertainty over the Trend and Initial Condition" by D. I. Harvey, S. J. Leybourne and A. M. R. Taylor, Econometric Theory 25 (2009), 643 – 648.
Testing Models of Low-Frequency Variability, Econometrica 76 (2008), 979 – 1016. (Joint with MARK WATSON.)
The Impossibility of Consistent Discrimination between I(0) and I(1) Processes, Econometric Theory 24 (2008), 616 – 630.
A Theory of Robust Long-Run Variance Estimation, Journal of Econometrics 141 (2007), 1331 – 1352. (Substantially different 2004 working paper).
Confidence Sets for the Date of a Single Break in Linear Time Series Regressions, Journal of Econometrics 141 (2007), 1196 – 1218. (Joint with GRAHAM ELLIOTT.)
Minimizing the Impact of the Initial Condition on Testing for Unit Roots, Journal of Econometrics 135 (2006), 285 – 310. (Joint with GRAHAM ELLIOTT.)
Efficient Tests for General Persistent Time Variation in Regression Coefficients, Review of Economic Studies 73 (2006), 907 – 940. (Formerly circulated under the title "Optimally Testing General Breaking Processes in Linear Time Series Models".) (Joint with GRAHAM ELLIOTT.)
Are Forecasters Reluctant to Revise their Predictions? Some German Evidence, Journal of Forecasting 25 (2006), 401 – 413. (Joint with GEBHARD KIRCHGÄSSNER.)
Size and Power of Tests for Stationarity in Highly Autocorrelated Time Series, Journal of Econometrics 128 (2005), 195 – 213.
Tests for Unit Roots and the Initial Condition, Econometrica 71 (2003), 1269 – 1286. (Joint with GRAHAM ELLIOTT.)
Ecological Tax Reform and Involuntary Unemployment: Simulation Results for Switzerland, Schweizerische Zeitschrift für Volkswirtschaft und Statistik 134 (1998), 329 – 359. (Joint with GEBHARD KIRCHGÄSSNER and MARCEL SAVIOZ.)