We propose a method for constructing confidence intervals that account for many forms of spatial correlation. The interval has the familiar "estimator plus or minus a standard error times a critical value" form, but we propose new methods for constructing the standard error and the critical value. The standard error is constructed using population principal components from a given "worst-case" spatial covariance model. The critical value is chosen to ensure coverage in a benchmark parametric model for the spatial correlations. The method is shown to control coverage in large samples whenever the spatial correlation is weak, i.e., with average pairwise correlations that vanish as the sample size gets large. We also provide results on correct coverage in a restricted but nonparametric class of strong spatial correlations, as well as on the efficiency of the method. In a design calibrated to match economic activity in U.S. states, the method outperforms previous suggestions for spatially robust inference about the population mean.
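The "estimator plus or minus a standard error times a critical value" form mentioned above can be made concrete with a minimal generic sketch. This is only the familiar interval template, not the paper's proposed standard error or critical value; the function name and the 1.96 normal critical value in the usage line are illustrative assumptions.

```python
def confidence_interval(estimate, std_error, critical_value):
    """Interval of the familiar form: estimate +/- critical_value * std_error.

    The paper's contribution lies in *how* std_error and critical_value
    are chosen; this sketch only shows the common interval template.
    """
    half_width = critical_value * std_error
    return estimate - half_width, estimate + half_width


# Usage: with the usual normal critical value 1.96 this is a textbook
# 95% interval; the paper replaces both inputs with robust choices.
lo, hi = confidence_interval(estimate=0.0, std_error=1.0, critical_value=1.96)
```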
Standard inference about a scalar parameter estimated via GMM amounts to applying a t-test to a particular set of observations. If the number of observations is not very large, then moderately heavy tails can lead to poor behavior of the t-test. This is a particular problem under clustering, since the number of observations then corresponds to the number of clusters, and heterogeneity in cluster sizes induces a form of heavy tails. This paper combines extreme value theory for the smallest and largest observations with a normal approximation for the average of the remaining observations to construct a more robust alternative to the t-test. The new test is found to control size in small samples much more successfully than existing methods. Analytical results in the canonical problem of inference about the mean demonstrate that the new test provides a refinement over the full-sample t-test when the underlying observations possess more than two but fewer than three moments, while the bootstrapped t-test does not.
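For reference, the full-sample t-test that the abstract benchmarks against can be sketched as follows. This is the standard two-sided t-test only, not the paper's new robust test; the function name and the clustered-data remark in the comment are illustrative assumptions.

```python
import numpy as np
from scipy import stats


def t_test_mean(x, mu0=0.0, alpha=0.05):
    """Two-sided t-test of H0: E[x] = mu0. Returns (t-statistic, reject?).

    Under clustering, x would hold the cluster-level estimates, so the
    effective sample size n equals the number of clusters and can be
    small even when the underlying dataset is large.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
    return t_stat, abs(t_stat) > crit


# Usage: a t-test on a dozen simulated "cluster-level" observations.
rng = np.random.default_rng(0)
t_stat, reject = t_test_mean(rng.standard_normal(12))
```

With heavy-tailed x and small n, the nominal size of this test can be badly distorted, which is the failure mode the paper's combination of extreme value theory and a normal approximation is designed to address.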
We propose a Bayesian procedure for exploiting small, possibly long-lag linear predictability in the innovations of a finite-order autoregression. We model the innovations as having a log-spectral density that is a continuous mean-zero Gaussian process of order 1/√T. This local embedding makes the problem asymptotically a normal-normal Bayes problem, resulting in closed-form solutions for the best forecast. When applied to data on 132 U.S. monthly macroeconomic time series, the method is found to improve upon autoregressive forecasts by an amount consistent with the theoretical and Monte Carlo calculations.
Forthcoming and Published Papers

Refining the Central Limit Theorem Approximation via Extreme Value Theory, Statistics & Probability Letters 155 (2019), 1 – 7.
Nearly Weighted Risk Minimal Unbiased Estimation, Journal of Econometrics 209 (2019), 18 – 34. (Joint with YULONG WANG.) Replication files. Slides.
Long-Run Covariability, Econometrica 86 (2018), 775 – 804. Mark Watson’s Fisher-Schultz lecture 2016. (Joint with MARK WATSON.) Appendix and Replication files. Slides.
Low-Frequency Econometrics. In Advances in Economics and Econometrics: Eleventh World Congress of the Econometric Society, Volume II, ed. by B. Honoré and L. Samuelson, Cambridge University Press (2017), 53 – 94. (Joint with MARK WATSON.) Replication files. Slides.
Fixed-k Asymptotic Inference about Tail Properties, Journal of the American Statistical Association 112 (2017), 1334 – 1343. (Joint with YULONG WANG.) Replication files. Slides.
Credibility of Confidence Sets in Nonstandard Econometric Problems, Econometrica 84 (2016), 2183 – 2213. (Joint with ANDRIY NORETS.) Supplementary Appendix. Slides.
Measuring Uncertainty about Long-Run Predictions, Review of Economic Studies 83 (2016), 1711 – 1740. (Joint with MARK WATSON.) Supplementary Appendix. Replication files. Slides.
Coverage Inducing Priors in Nonstandard Inference Problems. Journal of the American Statistical Association 111 (2016), 1233 – 1241. (Joint with ANDRIY NORETS.) Supplementary Appendix.
Inference with Few Heterogeneous Clusters, Review of Economics and Statistics 98 (2016), 83 – 96. (Joint with RUSTAM IBRAGIMOV.) Supplementary Appendix. Replication files. Slides.
Nearly Optimal Tests when a Nuisance Parameter is Present Under the Null Hypothesis, Econometrica 83 (2015), 771 – 811. (Joint with GRAHAM ELLIOTT and MARK WATSON.) Supplementary Appendix. Replication files. Slides.
HAC Corrections for Strongly Autocorrelated Time Series, Journal of Business & Economic Statistics 32 (2014), 311 – 322. Comments and Rejoinder. Slides.
Pre and Post Break Parameter Inference, Journal of Econometrics 180 (2014), 141 – 157. (Joint with GRAHAM ELLIOTT.) 2012 working paper version. Slides.
Risk of Bayesian Inference in Misspecified Models, and the Sandwich Covariance Matrix, Econometrica 81 (2013), 1805 – 1849. Slides.
Low-Frequency Robust Cointegration Testing, Journal of Econometrics 174 (2013), 66 – 81. (Joint with MARK WATSON.) Slides.
Measuring Prior Sensitivity and Prior Informativeness in Large Bayesian Models, Journal of Monetary Economics 59 (2012), 581 – 597. Slides.
Efficient Tests under a Weak Convergence Assumption, Econometrica 79 (2011), 395 – 435. (Formerly circulated under the title “An Alternative Sense of Asymptotic Efficiency”.) Slides.
Efficient Estimation of the Parameter Path in Unstable Time Series Models, Review of Economic Studies 77 (2010), 1508 – 1539. (Joint with PHILIPPE-EMMANUEL PETALAS.) Supplement. Correction. Slides.
t-statistic Based Correlation and Heterogeneity Robust Inference, Journal of Business & Economic Statistics 28 (2010), 453 – 468. (Joint with RUSTAM IBRAGIMOV.) Supplement. Slides.
Valid Inference in Partially Unstable GMM Models, Review of Economic Studies 76 (2009), 343 – 365. (Joint with HONG LI.) Slides.
Testing Models of Low-Frequency Variability, Econometrica 76 (2008), 979 – 1016. (Joint with MARK WATSON.) Slides.
The Impossibility of Consistent Discrimination between I(0) and I(1) Processes, Econometric Theory 24 (2008), 616 – 630. Slides.
A Theory of Robust Long-Run Variance Estimation, Journal of Econometrics 141 (2007), 1331 – 1352. (Substantially different 2004 working paper.)
Confidence Sets for the Date of a Single Break in Linear Time Series Regressions, Journal of Econometrics 141 (2007), 1196 – 1218. (Joint with GRAHAM ELLIOTT.)
Efficient Tests for General Persistent Time Variation in Regression Coefficients, Review of Economic Studies 73 (2006), 907 – 940. (Formerly circulated under the title “Optimally Testing General Breaking Processes in Linear Time Series Models”.) (Joint with GRAHAM ELLIOTT.)
Are Forecasters Reluctant to Revise their Predictions? Some German Evidence, Journal of Forecasting 25 (2006), 401 – 413. (Joint with GEBHARD KIRCHGÄSSNER.)
Size and Power of Tests for Stationarity in Highly Autocorrelated Time Series, Journal of Econometrics 128 (2005), 195 – 213.
Tests for Unit Roots and the Initial Condition, Econometrica 71 (2003), 1269 – 1286. (Joint with GRAHAM ELLIOTT.)
Comment on “HAR Inference: Recommendations for Practice” by E. Lazarus, D. J. Lewis and J. H. Stock, Journal of Business & Economic Statistics 36 (2018), 563 – 564.
Comment on “Unit Root Testing in Practice: Dealing with Uncertainty over the Trend and Initial Condition” by D. I. Harvey, S. J. Leybourne and A. M. R. Taylor, Econometric Theory 25 (2009), 643 – 648.