Consider inference with a small number of potentially heterogeneous clusters. Suppose estimating the model on each cluster yields q asymptotically unbiased, independent Gaussian estimators with potentially heterogeneous variances. Following Ibragimov and Müller (2010), one can then conduct asymptotically valid inference with a standard t-test based on the q cluster estimators, since at conventional significance levels, the small sample t-test remains valid under variance heterogeneity. This note makes two contributions. First, we establish the corresponding new small sample result for the two-sample t-test under variance heterogeneity. One can therefore also apply t-statistic based inference to comparisons of parameters between two populations, such as treatment and control groups, or pre- and post-structural break data. Second, we develop a test for the appropriate level of clustering, with the null hypothesis that clustered standard errors from a fine partition are correct, against the alternative that only q clusters provide asymptotically independent information.

Measuring Uncertainty about Long-Run Predictions. (Joint with MARK WATSON.)
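As a hypothetical illustration of the t-statistic based approach of Ibragimov and Müller (2010) described in the preceding abstract (a minimal sketch, not the papers' code): given per-cluster estimates, the one-sample procedure is an ordinary t-test on the q estimates, and the two-population comparison is a Welch two-sample t-test on the two groups of estimates. The function names are made up for this sketch.

```python
# Hypothetical sketch: t-statistic based inference with q clusters.
# Estimate the parameter separately on each cluster, then run ordinary
# t-tests on the resulting per-cluster estimates.
import numpy as np
from scipy import stats

def cluster_t_test(estimates, null_value=0.0):
    """One-sample t-test on per-cluster estimates (q = len(estimates))."""
    estimates = np.asarray(estimates, dtype=float)
    q = len(estimates)
    mean = estimates.mean()
    se = estimates.std(ddof=1) / np.sqrt(q)  # standard error of the mean
    t_stat = (mean - null_value) / se
    p_value = 2 * stats.t.sf(abs(t_stat), df=q - 1)
    return t_stat, p_value

def two_sample_cluster_t(est_a, est_b):
    """Welch two-sample t-test comparing per-cluster estimates from two
    populations (e.g., treatment vs. control clusters); equal_var=False
    allows heterogeneous variances across the two groups."""
    res = stats.ttest_ind(est_a, est_b, equal_var=False)
    return res.statistic, res.pvalue
```

The one-sample version coincides with `scipy.stats.ttest_1samp` applied to the cluster estimates; the point of the approach is that this small sample test remains valid at conventional levels even when the cluster estimators have different variances.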
Long-run forecasts of economic variables play an important role in policy, planning, and portfolio decisions. We consider long-horizon forecasts of average growth of a scalar variable, assuming that its first differences are second-order stationary. The main contribution is the construction of predictive sets with asymptotic coverage over a wide range of data generating processes, allowing for stochastically trending mean growth, slow mean reversion, and other types of long-run dependence. We illustrate the method by computing predictive sets for 10 to 75 year average growth rates of U.S. real per-capita GDP, consumption, productivity, the price level, stock prices, and population.

HAC Corrections for Strongly Autocorrelated Time Series.
Applied work routinely relies on heteroskedasticity and autocorrelation consistent (HAC) standard errors when conducting inference in a time series setting. As is well known, however, these corrections perform poorly in small samples under pronounced autocorrelation. In this paper, I first provide a review of popular methods to clarify the reasons for this failure. I then derive inference that remains valid under a specific form of strong dependence. In particular, I assume that the long-run properties can be approximated by a stationary Gaussian AR(1) model, with a coefficient arbitrarily close to one. In this setting, I derive tests that come close to maximizing a weighted average power criterion. Small sample simulations show these tests to perform well, including in a regression context.

Credibility of Confidence Sets in Nonstandard Econometric Problems. (Joint with ANDRIY NORETS.) 2012 Working Paper.
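For context on the HAC corrections discussed in the abstract above, here is a textbook Bartlett-kernel (Newey-West) long-run variance estimator — a minimal sketch of a standard existing method, not the paper's proposal; the function name and the fixed bandwidth are illustrative choices.

```python
# Textbook Newey-West (Bartlett kernel) long-run variance estimator.
# HAC standard errors are built from estimates like this one; the abstract
# above concerns their poor small-sample behavior under strong dependence.
import numpy as np

def newey_west_lrv(u, bandwidth):
    """Bartlett-kernel long-run variance of a mean-zero series u."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    lrv = u @ u / T  # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1)   # Bartlett weight, guarantees lrv >= 0
        gamma_j = u[j:] @ u[:-j] / T    # lag-j autocovariance
        lrv += 2.0 * w * gamma_j
    return lrv
```

With bandwidth 0 this reduces to the sample variance of u; larger bandwidths add downweighted autocovariances to account for serial correlation. When the series is as persistent as the near-unit-root AR(1) processes considered in the paper, any fixed bandwidth badly underestimates the long-run variance, which is the failure the paper addresses.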
Confidence intervals are commonly used to describe parameter uncertainty. In nonstandard problems, however, their frequentist coverage property does not guarantee that they do so in a reasonable fashion. For instance, confidence intervals may be empty or extremely short with positive probability, even if they are based on inverting powerful tests. We apply a betting framework to formalize the "reasonableness" of confidence intervals as descriptions of parameter uncertainty, and use it for two purposes. First, we quantify the degree of unreasonableness of previously suggested confidence intervals in nonstandard problems. Second, we derive alternative confidence sets that are reasonable by construction. We apply our framework to inference about a parameter near a boundary and a local-to-unity autoregressive root. We find that previously suggested confidence intervals are not reasonable, and numerically determine alternative confidence sets that satisfy our criteria.

Nearly Optimal Tests when a Nuisance Parameter is Present Under the Null Hypothesis. (Joint with GRAHAM ELLIOTT and MARK WATSON.)
This paper considers nonstandard hypothesis testing problems that involve a nuisance parameter. We establish a bound on the weighted average power of all valid tests, and develop a numerical algorithm that determines a feasible test with power close to the bound. The approach is illustrated in six applications: inference about a linear regression coefficient when the sign of a control coefficient is known; small sample inference about the difference in means from two independent Gaussian samples from populations with potentially different variances; inference about the break date in structural break models with moderate break magnitude; predictability tests when the regressor is highly persistent; inference about an interval identified parameter; and inference about a linear regression coefficient when the necessity of a control is in doubt.

Forecasts in a Slightly Misspecified Finite Order VAR. (Joint with JAMES STOCK.)
We propose a Bayesian procedure for exploiting small, possibly long-lag linear predictability in the innovations of a finite order autoregression. We model the innovations as having a log-spectral density that is a continuous mean-zero Gaussian process of order 1/sqrt(T). This local embedding makes the problem asymptotically a normal-normal Bayes problem, resulting in closed-form solutions for the best forecast. When applied to data on 132 U.S. monthly macroeconomic time series, the method is found to improve upon autoregressive forecasts by an amount consistent with the theoretical and Monte Carlo calculations.

Pre and Post Break Parameter Inference. (Joint with GRAHAM ELLIOTT.)
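As background for the closed-form solutions mentioned in the VAR-forecasting abstract above, the scalar normal-normal update takes the following standard form (a textbook fact illustrating the mechanism, not the paper's exact formula):

```latex
x \mid \theta \sim N(\theta, \sigma^2), \quad \theta \sim N(0, \tau^2)
\;\Longrightarrow\;
\theta \mid x \sim N\!\left(\frac{\tau^2}{\sigma^2 + \tau^2}\, x,\;
\frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}\right)
```

The posterior mean shrinks the observation toward the prior mean of zero, which is the sense in which small, noisily estimated predictability is exploited only partially in the best forecast.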
Consider inference about the pre and post break value of a scalar parameter in a time series model with a single break at an unknown date. Unless the break is large, treating the break date estimated by least squares as the true break date leads to substantially oversized tests and confidence intervals. To develop a suitable alternative, we first establish convergence to a Gaussian process limit experiment. We then determine a nearly weighted average power maximizing test in this limit experiment, and show how to implement a small sample analogue in GMM time series models.

Forthcoming and Published Papers
Risk of Bayesian Inference in Misspecified Models, and the Sandwich Covariance Matrix. Accepted for publication in Econometrica.
Low-Frequency Robust Cointegration Testing, Journal of Econometrics 174 (2013), 66 – 81. (Joint with MARK WATSON.)
Measuring Prior Sensitivity and Prior Informativeness in Large Bayesian Models, Journal of Monetary Economics 59 (2012), 581 – 597.
Efficient Tests under a Weak Convergence Assumption, Econometrica 79 (2011), 395 – 435. (Formerly circulated under the title "An Alternative Sense of Asymptotic Efficiency".)
Estimation of the Parameter Path in Unstable Time Series Models, Review of Economic Studies 77 (2010), 1508 – 1539. Supplement. Correction. (Joint with PHILIPPE-EMMANUEL PETALAS.)
t-statistic Based Correlation and Heterogeneity Robust Inference, Journal of Business & Economic Statistics 28 (2010), 453 – 468. Supplement. (Joint with RUSTAM IBRAGIMOV.)
Valid Inference in Partially Unstable GMM Models, Review of Economic Studies 76 (2009), 343 – 365. (Joint with HONG LI.)
Comment on "Unit Root Testing in Practice: Dealing with Uncertainty over the Trend and Initial Condition" by D. I. Harvey, S. J. Leybourne and A. M. R. Taylor, Econometric Theory 25 (2009), 643 – 648.
Testing Models of Low-Frequency Variability, Econometrica 76 (2008), 979 – 1016. (Joint with MARK WATSON.)
The Impossibility of Consistent Discrimination between I(0) and I(1) Processes, Econometric Theory 24 (2008), 616 – 630.
A Theory of Robust Long-Run Variance Estimation, Journal of Econometrics 141 (2007), 1331 – 1352. (Substantially different 2004 working paper).
Confidence Sets for the Date of a Single Break in Linear Time Series Regressions, Journal of Econometrics 141 (2007), 1196 – 1218. (Joint with GRAHAM ELLIOTT.)
Minimizing the Impact of the Initial Condition on Testing for Unit Roots, Journal of Econometrics 135 (2006), 285 – 310. (Joint with GRAHAM ELLIOTT.)
Efficient Tests for General Persistent Time Variation in Regression Coefficients, Review of Economic Studies 73 (2006), 907 – 940. Formerly circulated under the title “Optimally Testing General Breaking Processes in Linear Time Series Models”. (Joint with GRAHAM ELLIOTT.)
Are Forecasters Reluctant to Revise their Predictions? Some German Evidence, Journal of Forecasting 25 (2006), 401 – 413. (Joint with GEBHARD KIRCHGÄSSNER.)
Size and Power of Tests for Stationarity in Highly Autocorrelated Time Series, Journal of Econometrics 128 (2005), 195 – 213.
Tests for Unit Roots and the Initial Condition, Econometrica 71 (2003), 1269 – 1286. (Joint with GRAHAM ELLIOTT.)
Ecological Tax Reform and Involuntary Unemployment: Simulation Results for Switzerland, Schweizerische Zeitschrift für Volkswirtschaft und Statistik 134 (1998), 329 – 359. (Joint with GEBHARD KIRCHGÄSSNER and MARCEL SAVIOZ.)