Monday, October 22, 2012

MacGarch

Models with a persistent, time-varying error variance (i.e., GARCH models) are mainly used in Macro to investigate whether uncertainty affects the conditional mean (i.e., GARCH-in-Mean, or GARCH-M).
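To fix ideas, here is a minimal simulation sketch of a GARCH(1,1)-in-Mean process (mine, not from the post; the parameter values are purely illustrative): the conditional variance follows a GARCH(1,1) recursion and enters the conditional mean directly.

```python
# Minimal GARCH(1,1)-in-Mean simulation (illustrative parameter values only).
import numpy as np

rng = np.random.default_rng(0)
T = 1000
mu, delta = 0.1, 0.5                    # mean intercept and in-Mean coefficient
omega, alpha, beta = 0.05, 0.10, 0.85   # GARCH(1,1) variance parameters

y, eps, sigma2 = np.zeros(T), np.zeros(T), np.zeros(T)
sigma2[0] = omega / (1 - alpha - beta)  # start at the unconditional variance

for t in range(T):
    if t > 0:
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    y[t] = mu + delta * sigma2[t] + eps[t]  # uncertainty shifts the conditional mean
```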

However, even if we are not modeling GARCH-in-Mean effects, ignoring conditional heteroskedasticity, or merely "correcting" our coefficient standard errors for it with a White-type correction, can be problematic.
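A natural first step, before deciding how to handle it, is simply to test the residuals for ARCH effects. A rough sketch using Engle's ARCH-LM test from statsmodels, with made-up data standing in for your own series:

```python
# Sketch: test OLS residuals for ARCH effects with Engle's ARCH-LM test.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(1)
T = 500
x = rng.standard_normal(T)             # placeholder regressor
y = 0.5 * x + rng.standard_normal(T)   # placeholder dependent variable

resid = sm.OLS(y, sm.add_constant(x)).fit().resid
lm_stat, lm_pval, f_stat, f_pval = het_arch(resid, nlags=4)
print(f"ARCH-LM p-value: {lm_pval:.3f}")  # small p-value => evidence of conditional heteroskedasticity
```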

For one thing, a maximum likelihood approach can have almost infinite relative efficiency gains over OLS. Thus, in a VAR context, ignoring the conditional variance-covariance process can lead to poorly estimated coefficients and thus poorly estimated impulse responses.
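A minimal Monte Carlo sketch of that efficiency point, under illustrative assumptions rather than anything from the papers cited here: regress y on x when the errors follow a GARCH(1,1) process, then compare plain OLS with an (infeasible) estimator that weights by the true conditional variances, a stand-in for what ML with a correctly specified variance model delivers. Both are unbiased, but the weighted slope estimates are far less dispersed.

```python
# Monte Carlo: OLS vs. variance-weighted estimation with GARCH(1,1) errors
# (illustrative parameter values; the weighted estimator uses the true sigma2_t).
import numpy as np

rng = np.random.default_rng(2)
T, reps = 500, 1000
b_true = 1.0
omega, alpha, beta = 0.05, 0.20, 0.75

ols, wls = [], []
for _ in range(reps):
    x = rng.standard_normal(T)
    eps, sigma2 = np.zeros(T), np.zeros(T)
    sigma2[0] = omega / (1 - alpha - beta)
    for t in range(T):
        if t > 0:
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    y = b_true * x + eps

    ols.append(np.sum(x * y) / np.sum(x * x))          # OLS slope
    w = 1.0 / sigma2
    wls.append(np.sum(w * x * y) / np.sum(w * x * x))  # slope weighted by 1/sigma2_t

print("std of OLS slope:     ", np.std(ols))
print("std of weighted slope:", np.std(wls))
```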

For another, White (or Newey-West) standard errors are not generally appropriate in the case of a GARCH variance process.

Jim Hamilton has a great piece about these phenomena, with a couple of interesting examples of how dealing with the conditional variance can change inferences about the conditional mean.

This is a situation I've seen in my own work. Here is an older piece with Mark Perry in the Journal of Finance about liquidity effects, and here is a joint piece with Haichun Ye in Economic Inquiry about the twin deficit phenomenon. In both cases, modeling the conditional variance process changed inferences about the conditional mean.

Here is a recent piece by rising macro star Olivier Coibion in the AEJ: Macro, which also demonstrates the importance of modeling the conditional variance process.

GARCH (or Stochastic Volatility if you prefer) in macro is still way underappreciated and underused.

4 comments:

Anonymous said...

"a maximum likelihood approach can have almost infinite relative efficiency gains over OLS"

When you say relative efficiency gains, what do you mean by that?

Angus said...

The theoretical variances of the OLS coefficient estimates can be arbitrarily large compared to the variances of the ML estimates. So using ML means your coefficient estimates are drawn from a distribution much more tightly concentrated around the true values than the OLS estimates are.

Anonymous said...

What assurances do you have that the error distributions you use to construct the ML estimator are correct? If you know the error model for certain, ML is great and you get the tighter estimates you describe, but if the error model is mis-specified, then OLS (or its cousin, weighted least squares, which incorporates heteroskedasticity) is more robust to that mis-specification.

In a slightly different way of asking, what guarantee do you have that your tighter ML coefficient distribution is in any way correct (or more correct than a wider OLS one)?

I guess another important question is whether the models you're using are linear or nonlinear. The issues mentioned are likely less of a problem in the simpler linear case.

Anonymous said...

For the record, I totally agree with the point that people need to look at heteroskedasticity more carefully, in many different fields.