Hi
@Mat5 Right, that step is just the application of the GARCH(1,1). Just as today's updated estimate of variance is given by GARCH(1,1) per σ^2(n) = ω + α*µ^2(n-1) + β*σ^2(n-1), we can say that tomorrow's expected variance will be a function of today's variance and return: E[σ^2(n+1)] = ω + α*µ^2(n) + β*σ^2(n), and because today's return and variance are already known, we can drop the E[.] on the right side.
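To make the one-step update concrete, here is a minimal Python sketch of the GARCH(1,1) recursion; the parameter values (ω, α, β) and the returns are purely illustrative, not taken from Hull:

```python
# GARCH(1,1) update: sigma^2(n) = omega + alpha*u^2(n-1) + beta*sigma^2(n-1)
# Parameter values below are hypothetical, chosen only for illustration.
omega, alpha, beta = 0.000002, 0.08, 0.90

def garch_update(u_prev: float, sigma2_prev: float) -> float:
    """Variance estimate from the previous day's return and variance."""
    return omega + alpha * u_prev ** 2 + beta * sigma2_prev

# Today's variance estimate from yesterday's return and variance:
sigma2_n = garch_update(u_prev=0.01, sigma2_prev=0.0001)

# Tomorrow's expected variance: today's return and variance are already
# known quantities, so no expectation operator is needed on the right side.
expected_sigma2_n_plus_1 = garch_update(u_prev=0.012, sigma2_prev=sigma2_n)
```

The point of the second call is exactly the one in the text: the inputs on the right-hand side are observed values, so E[.] only wraps the left-hand side.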
The linked thread is interesting. I have a simpler reason for accepting Hull's assumption that E[µ^2(n+t-1)] = σ^2(n+t-1). (Please note I mistakenly wrote an E[.] on the right-hand side above; there should be no E[.] on the right.) This assumption simply says that the expected squared return equals the most recent variance estimate. Lacking any other information, our best guess of the squared return should be the most recent variance: that is the one squared return that would not change the variance estimate. We can think of GARCH(1,1) as a weighted average of three variances: γ*(long-run variance) + β*(recent variance estimate) + α*(the "innovation," which is basically a one-day variance). The squared return is a one-day variance, so it slightly re-weights the average variance. (Technically GARCH is subtler than this, but IMO the intuition is not "at variance" with the theory, pun intended.) Thanks!
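The weighted-average view can be sketched in a few lines; the numbers are hypothetical, and I use the standard parameterization where ω = γ*V_L with γ = 1 - α - β:

```python
# GARCH(1,1) as a weighted average of three variances.
# All values are illustrative (hypothetical), not from Hull.
alpha, beta = 0.08, 0.90
gamma = 1.0 - alpha - beta        # weight on the long-run variance
v_long_run = 0.0001               # long-run (unconditional) variance V_L
omega = gamma * v_long_run        # standard relation: omega = gamma * V_L

sigma2_recent = 0.00012           # most recent variance estimate
u_recent = 0.011                  # most recent return; u^2 is a one-day variance

# Weighted average of long-run variance, recent estimate, and innovation:
sigma2_next = gamma * v_long_run + beta * sigma2_recent + alpha * u_recent ** 2

# Algebraically identical to the standard form omega + alpha*u^2 + beta*sigma^2:
standard_form = omega + alpha * u_recent ** 2 + beta * sigma2_recent
```

Because γ + β + α = 1, the update really is an average of three variance numbers, which is why a large squared return "re-weights" the estimate upward.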