Models with a persistent, time-varying error variance (i.e., GARCH models) are mainly used in macro to investigate whether uncertainty affects the conditional mean (i.e., GARCH-in-Mean, or GARCH-M).
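To fix ideas, here is a minimal sketch of what such an error process looks like: a GARCH(1,1), where today's error variance depends on yesterday's squared error and yesterday's variance. The parameter values are illustrative choices of mine, not from any of the papers discussed here.

```python
import numpy as np

# Simulate a GARCH(1,1) error process:
#   eps_t = sigma_t * z_t,  z_t ~ N(0,1)
#   sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2
# Illustrative parameters; alpha + beta = 0.95 gives persistent volatility.
rng = np.random.default_rng(0)
omega, alpha, beta = 0.1, 0.15, 0.80
T = 1000
eps = np.zeros(T)
sig2 = np.zeros(T)
sig2[0] = omega / (1 - alpha - beta)   # start at the unconditional variance
for t in range(1, T):
    sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Volatility clustering: the errors are serially uncorrelated, but their
# squares are positively autocorrelated.
print(np.corrcoef(eps[1:] ** 2, eps[:-1] ** 2)[0, 1])
```

The errors themselves look like white noise, but calm and turbulent stretches cluster together, which is exactly the feature the rest of this post argues macro people should not ignore.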
However, even when we are not modeling GARCH-in-Mean effects, it can be problematic either to ignore conditional heteroskedasticity or merely to "correct" our coefficient standard errors for it with a White-type adjustment.
For one thing, a maximum likelihood approach that models the variance process can have arbitrarily large relative efficiency gains over OLS. Thus, in a VAR context, ignoring the conditional variance-covariance process can lead to poorly estimated coefficients and thus poorly estimated impulse responses.
For another, White (or Newey-West) standard errors are not generally appropriate in the case of a GARCH variance process.
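To make the efficiency point concrete, here is a small Monte Carlo sketch. It regresses y on x when the errors follow a GARCH(1,1), and compares OLS with a weighted estimator that uses the true conditional variances as weights. That weighted estimator is infeasible in practice; it stands in for ML, which estimates the variance process rather than knowing it. All parameter values are illustrative choices of mine.

```python
import numpy as np

# Monte Carlo: y_t = b * x_t + eps_t, with eps_t a GARCH(1,1) process.
# OLS stays consistent but is inefficient; weighting observations by
# 1 / sigma_t^2 (here, the *true* variances, as an infeasible stand-in
# for ML) can shrink the sampling variance substantially.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.30, 0.65   # illustrative; E[sigma^2] = 1
b, T, reps = 1.0, 300, 1000
ols, wls = [], []
for _ in range(reps):
    x = rng.standard_normal(T)
    eps = np.zeros(T)
    sig2 = np.full(T, omega / (1 - alpha - beta))
    for t in range(1, T):
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
        eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    y = b * x + eps
    ols.append((x @ y) / (x @ x))            # ordinary least squares
    w = 1.0 / sig2                           # weight by inverse variance
    wls.append((w * x) @ y / ((w * x) @ x))  # infeasible weighted LS

# Ratio of sampling variances: how much precision OLS leaves on the table.
print(np.var(ols) / np.var(wls))
```

Both estimators are unbiased here; the point is the spread. The more variable the conditional variance, the larger the ratio gets, which is the sense in which the efficiency gains can be arbitrarily large.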
Jim Hamilton has a great piece about these phenomena, with a couple of interesting examples of how dealing with the conditional variance can change inferences about the conditional mean.
This is a situation I've seen in my own work. Here is an older piece with Mark Perry in the Journal of Finance about liquidity effects, and here is a joint piece with Haichun Ye in Economic Inquiry about the twin deficit phenomenon. In both cases, modeling the conditional variance process changed inferences about the conditional mean.
Here is a recent piece by rising macro star Olivier Coibion in the AEJ: Macro, which also demonstrates the importance of modeling the conditional variance process.
GARCH (or Stochastic Volatility, if you prefer) in macro is still way underappreciated and underused.