This is Jeff Hallman's Typepad Profile.
Jeff Hallman
Recent Activity
Thanks for noticing, Nick. I wouldn't refer to myself as a serious econometrician these days, mostly I do programming. But I used to be.

On the BVAR: It is well known in economic forecasting circles that simple models with few parameters tend to forecast better out of sample than more elaborate models with many parameters. Suppose you simulate a model like:

y(t) = a + b1*x1(t) + b2*x2(t) + ... + bk*xk(t) + e(t)    (1)

where e(t) and the xi(t) are independent and identically distributed, and with 1 > b1 > b2 > ... > bk > 0. If the number of observations you have is large relative to k, then estimating a model that includes all k of the xi(t) will give you good out-of-sample forecasts. But if you don't have very many observations, you'll find that dropping some of the xi variables with small coefficients from the regression improves the model's forecasting ability. The variance of the estimated coefficients is smaller if you estimate fewer of them, and for prediction purposes, a precisely estimated incorrect model is often better than an imprecisely estimated correct model.

A typical conventional vector autoregression (VAR) in macro has 4 lags of six variables. Throw in a constant, and each of the six equations has 25 parameters to estimate. You also have to estimate the 21 parameters of the symmetric 6 by 6 covariance matrix. You typically only have 20 or 25 years of quarterly data to work with, which means you're estimating 171 parameters with only 480 to 600 observations. That doesn't sound too bad until you realize that there's likely to be considerable collinearity amongst your six variables. You are going to end up with estimated standard errors on your coefficients that are so large as to render most of them meaningless, and the out-of-sample forecasts will be quite poor.

Litterman's BVAR is a form of ridge regression, an old technique used by statisticians to reduce the effective number of parameters estimated by biasing the estimates in a particular direction. It is one of a number of so-called "shrinkage" estimators. In traditional ridge regression, the coefficients are shrunk (biased) towards zero. The Litterman prior biases the coefficients towards the "six independent random walks" model. However, you can use the same technique to bias coefficients in some other direction. As a grad student many years ago, I worked for a while on shrinking a VAR towards a cointegration prior, but I never really finished it. One day somebody ought to pursue it.

At any rate, the fact that BVARs with the Litterman prior do about as well at forecasting most macroeconomic series as the big econometric models that used to be popular is one reason those big models have fallen out of favor. What's important in the context of this discussion is that they contain zero economic theory content, and yet they perform about as well as models with lots of built-in economic theories. It's true that BVARs are lousy at predicting things like turning points or evaluating the effect of policy changes. But then again, that's also empirically true of the theory-based models.
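A minimal simulation sketch of these two points: that with few observations a smaller model can out-forecast the full "correct" one, and that ridge-style shrinkage gets a similar effect by biasing coefficients toward zero. The coefficient values, sample sizes, drop threshold, and ridge penalty below are illustrative assumptions, not anything from the comment.

```python
# Sketch: with few observations, a reduced or shrunken model often forecasts
# better out of sample than the full "correct" model.  All settings are
# illustrative.
import numpy as np

rng = np.random.default_rng(0)
k, n_train, n_test, n_sims = 10, 30, 200, 500
a = 0.5
b = np.array([0.9, 0.7, 0.5, 0.3, 0.1, 0.08, 0.06, 0.04, 0.02, 0.01])  # 1 > b1 > ... > bk > 0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ridge(X, y, lam):
    # shrink slope coefficients toward zero: (X'X + lam*P)^{-1} X'y,
    # leaving the intercept unpenalized
    P = np.eye(X.shape[1]); P[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + lam * P, X.T @ y)

mse = {"full OLS": 0.0, "drop small b's": 0.0, "ridge": 0.0}
for _ in range(n_sims):
    X = rng.normal(size=(n_train + n_test, k))            # iid regressors, as in (1)
    y = a + X @ b + rng.normal(size=n_train + n_test)     # iid errors
    Xc = np.column_stack([np.ones(n_train + n_test), X])  # add the constant
    Xtr, ytr, Xte, yte = Xc[:n_train], y[:n_train], Xc[n_train:], y[n_train:]

    beta_full = ols(Xtr, ytr)
    small = np.concatenate([[False], b < 0.2])             # drop x's with small coefficients
    beta_red = np.zeros(k + 1)
    beta_red[~small] = ols(Xtr[:, ~small], ytr)
    beta_ridge = ridge(Xtr, ytr, lam=5.0)

    # accumulate average out-of-sample mean squared forecast error
    mse["full OLS"]       += np.mean((yte - Xte @ beta_full) ** 2) / n_sims
    mse["drop small b's"] += np.mean((yte - Xte @ beta_red) ** 2) / n_sims
    mse["ridge"]          += np.mean((yte - Xte @ beta_ridge) ** 2) / n_sims

for name, m in mse.items():
    print(f"{name:15s} out-of-sample MSE: {m:.3f}")
```

Varying n_train in the sketch shows the familiar pattern: with plenty of data the full OLS fit wins, while with short samples the reduced and shrunken fits tend to have lower out-of-sample error, which is the sense in which a precisely estimated wrong model can beat an imprecisely estimated correct one.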
1 reply
Of course the idea that because i = p + m - g in equilibrium, you can just set i to zero, assume (m - g) is constant, and presto, get p = the inverse of that constant, without considering either (i) where you started from, or (ii) how you get there from here, is just nuts. And the fact that Steve, and now you(?), and Jesus Fernandez-Villaverde on Mark Thoma's blog, agree with him, definitely confirms it's not random. There are real, big, systemic problems with the economics that some people are learning (or not learning). And that Steve should be surprised that others find this controversial is really surprising. Didn't he know what everyone else thinks, even if he does disagree?

Academic economists get rewarded for publishing papers. You don't get rewarded for understanding how the economy works, or how your predecessors like Patinkin and Leijonhufvud thought it worked. You get rewarded for novelty, and especially for mathematical rigor. Economic modeling in this world is just an exercise in working out the logical implications of a set of assumptions. It doesn't really matter if your theory makes correct predictions if it's elegant enough. This is the arrogance of modern macroeconomics, a discipline that never admits in public how little it really knows.

We've known for a long time how to evaluate economic theories. You start by fitting a good atheoretic statistical model, like Litterman's Bayesian Vector Autoregression (BVAR), to the variables of interest. The economic theory you want to evaluate implies that some restrictions on the BVAR's coefficients and residual covariances should hold. Estimating a BVAR is an optimization problem, and you can do the optimizing subject to the restrictions implied by the theory. This gives you two estimated models, and you can do likelihood ratio tests to see whether the data reject the theoretical restrictions or not. If they do, you're done: the theory is obviously wrong. But if the data don't reject the theory, you aren't done yet. You should compare the out-of-sample forecasting ability of the theory-restricted model with that of the atheoretic BVAR. If your theory doesn't lead to substantial improvements in the forecast, then it really isn't very informative about the way the world works.

I have not been active in academic economics for many years, but my impression from afar is that very few models pass the first hurdle of not being obviously wrong. The ones that do are of the New Keynesian type. This is not really very surprising, because the New Keynesians have always paid a lot of attention to empirical data in coming up with their theories. The RBC crowd starts more from first principles, and their models are violently rejected by the data. However, I also don't believe that even the New Keynesian models forecast out of sample much better than a BVAR does. In that sense, none of us really knows much about how the economy really works.

But there are a few empirical regularities, a.k.a. stylized facts, that we observe even if our attempts at modeling them are lacking. One is that Friedman is correct: inflation is always and everywhere a monetary phenomenon. We know that central banks can create inflation or vanquish it. We don't need a theory to explain this because we've seen them do it, repeatedly. What we need in monetary policy are simple rules that are robust to uncertainty about how the economy actually works. That's always been the appeal of Taylor rules, and it applies even more to the Sumnerian policy of targeting a path for expected NGDP.
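A sketch of that evaluation recipe, under stand-in assumptions: the data are simulated rather than real macro series, the model is a small unrestricted VAR(1) estimated by plain OLS rather than a Litterman BVAR, and the "theoretical" restriction is just a single zero coefficient. The steps are the ones described above: estimate with and without the restriction, run a likelihood ratio test, then compare out-of-sample forecasts.

```python
# Sketch of the evaluation recipe: restricted vs. unrestricted VAR,
# likelihood-ratio test, then an out-of-sample forecast comparison.
# Data and the "theoretical" restriction are made up for illustration.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# --- simulate a 2-variable VAR(1) as a stand-in for real macro data -----
T, k = 120, 2
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.7]])   # true A satisfies the restriction imposed below (A[1,0] = 0)
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.5, size=k)

T_train = 100
train, test = y[:T_train], y[T_train - 1:]   # keep one observation as the first holdout lag

def fit_var1(data, zero_mask=None):
    """OLS VAR(1), equation by equation (no intercept, for brevity);
    zero_mask[i, j] = True forces coefficient A[i, j] = 0.
    Full ML/SUR under the restriction would be the more careful approach."""
    X, Y = data[:-1], data[1:]
    A = np.zeros((k, k))
    for i in range(k):
        keep = np.ones(k, dtype=bool) if zero_mask is None else ~zero_mask[i]
        A[i, keep] = np.linalg.lstsq(X[:, keep], Y[:, i], rcond=None)[0]
    resid = Y - X @ A.T
    Sigma = resid.T @ resid / len(Y)          # ML-style residual covariance
    return A, Sigma

# restriction implied by the stand-in "theory": variable 0 does not enter equation 1
mask = np.zeros((k, k), dtype=bool); mask[1, 0] = True
A_u, Sig_u = fit_var1(train)
A_r, Sig_r = fit_var1(train, zero_mask=mask)

# likelihood-ratio test of the restriction
n_obs = T_train - 1
LR = n_obs * (np.log(np.linalg.det(Sig_r)) - np.log(np.linalg.det(Sig_u)))
df = int(mask.sum())
print(f"LR = {LR:.2f}, p-value = {chi2.sf(LR, df):.3f}")

# out-of-sample comparison: one-step-ahead RMSE on the holdout
def rmse(A, data):
    fcst = data[:-1] @ A.T
    return np.sqrt(np.mean((data[1:] - fcst) ** 2))

print(f"holdout RMSE, unrestricted: {rmse(A_u, test):.3f}")
print(f"holdout RMSE, restricted:   {rmse(A_r, test):.3f}")
```

Swapping in real data, more lags, a Litterman-style prior, and the restrictions an actual theory implies is where the real work lies, but the structure of the comparison stays the same: first ask whether the data reject the restrictions, then ask whether imposing them actually improves the out-of-sample forecasts.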
1 reply
Jeff Hallman is now following The Typepad Team
Aug 27, 2010