Hi
@Jaskarn Invertibility enables the translation of the moving average into an
autoregressive process. I think grokking invertibility may presume knowledge of time series dynamics that is introduced earlier in Diebold (i.e., it is a moderate/advanced topic for which the current FRM syllabus does not really provide proper
scaffolding). I think Charles Zaiontz (who makes the Real Statistics add-in, which is used in a few of our learning XLS and is highly recommended) has a good explanation here:
http://www.real-statistics.com/time...average-processes/invertibility-ma-processes/
... notice how the final step raises Θ to the j-th power? This effectively requires that -1.0 < Θ < +1.0, which is the invertibility condition.
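A quick numeric sketch (my own illustration in Python, not from Zaiontz's page) of why raising Θ to the j-th power forces |Θ| < 1: the weight on the j-th lagged observation in the inverted representation is, up to sign, Θ^j, and those weights die out only when |Θ| < 1:

```python
# Weight on the j-th lag in the inverted MA(1) is (up to sign) theta**j.
# With |theta| < 1 the weights decay toward zero, so the infinite sum converges;
# with |theta| > 1 they explode and no convergent AR representation exists.
for theta in (0.5, 1.5):
    weights = [theta ** j for j in range(1, 6)]
    print(theta, [round(w, 4) for w in weights])
```

With theta = 0.5 the printed weights shrink (0.5, 0.25, ...); with theta = 1.5 they grow without bound, which is exactly the failure of invertibility.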
Diebold defines/explains it thusly (page 141):
"Note that the requirements of covariance stationarity (constant unconditional mean, constant and finite unconditional variance, autocorrelation depends only on displacement) are met for any MA(1) process, regardless of the values of its parameters. If, moreover, |θ| < 1, then we say that the MA(1) process is invertible. In that case, we can “invert” the MA(1) process and express the current value of the series not in terms of a current shock and a lagged shock, but rather in terms of a current shock and lagged values of the series. That’s called an autoregressive representation. An autoregressive representation has a current shock and lagged observable values of the series on the right, whereas a moving average representation has a current shock and lagged unobservable shocks on the right ..."
... and goes on to convert the MA(1) process, in which the parameter theta multiplies the lagged shock ε(t-1), into an autoregressive representation which, as in Charles Zaiontz's final step, expresses the current value, y(i), as a function of the previous value, y(i-1). In this way, invertibility enables the autoregressive representation and, as Diebold says (emphasis mine), "Autoregressive representations are appealing to forecasters, because one way or another, if a model is to be used for real-world forecasting, it’s got to link the present observables to the past history of observables, so that we can extrapolate to form a forecast of future observables based on present and past observables. Superficially, moving average models don’t seem to meet that requirement, because the current value of a series is expressed in terms of current and lagged unobservable shocks, not observable variables. But under the invertibility conditions that we’ve described, moving average processes have equivalent autoregressive representations. Thus, although we want autoregressive representations for forecasting, we don’t have to start with an autoregressive model. However, we typically restrict ourselves to invertible processes, because for forecasting purposes we want to be able to express current observables as functions of past observables." I hope that's helpful!
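To make the equivalence concrete, here is a small simulation (my own sketch in Python/numpy; the variable names and the specific theta = 0.5 are mine, not Diebold's): simulate an invertible MA(1), then recover the unobservable shocks purely from lagged observed values via the truncated AR(∞) representation ε(t) ≈ Σ (-θ)^j y(t-j):

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 0.5          # invertible, since |theta| < 1
n = 500
eps = rng.standard_normal(n)

# MA(1): y(t) = eps(t) + theta * eps(t-1)
y = eps.copy()
y[1:] += theta * eps[:-1]

# Inverted (autoregressive) representation, truncated at J lags:
# eps(t) = sum_{j=0..J} (-theta)**j * y(t-j); truncation error ~ theta**(J+1)
J = 30
eps_hat = np.zeros(n)
for t in range(J, n):
    eps_hat[t] = sum((-theta) ** j * y[t - j] for j in range(J + 1))

# The shocks are recovered from observables alone, to within theta**(J+1)
max_err = np.abs(eps_hat[J:] - eps[J:]).max()
print(max_err)
```

The recovery error is tiny (on the order of 0.5^31) precisely because |theta| < 1 makes the lag weights decay; with |theta| > 1 the same truncated sum would diverge, which is why forecasters restrict themselves to invertible processes.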