
P1.T2.QA Ch.11 Non Stationary Time Series - Non Linear Trends

dtammerz

Member
Subscriber
Hello,

I would like some further clarification on the log-linear function that appears on page 5 of the Ch.11 materials:
[attached image: 1597588917616.png]
GARP's Ch.11 material writes the formula differently, so I'm having some trouble reconciling the two. Forgive me if this is something basic or intuitive (I don't remember much high school math now), but in GARP's material there is no "ln" in front of "δ0" in the formula shown next to 11.4 on p. 189.
[attached image: 1597588997926.png]

Are these two formulas supposed to be equivalent? What difference does it make whether the "ln" is on the left side of the equation only, or on both sides?

Thank you
 

David Harper CFA FRM
Staff member
Subscriber
Hi @dtammerz Yes, the models (formulas) are equivalent; there is no substantive difference because β(0) is a constant and ln[β(0)] is another constant, so in the general form the intercept can be represented without the LN(.). Let's illustrate with an exponential trend that begins at $5.00 and grows constantly at 8.0% per year. Under this assumption of β(0) = $5.00 and β(1) = 8.0%, the future price P(t) = β(0)*exp[β(1)*TIME]; e.g., P(3) = 5.00*exp(8.0%*3) = $6.36. FRM candidates will immediately recognize this as the terminal price under an assumption of continuously compounded returns; i.e., the log return = ln[P(t)/P(t-1)], and here the log return is constant at β(1) = 8.0%.
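The arithmetic above can be checked with a quick Python sketch (the $5.00 start and 8.0% growth rate are just the illustrative assumptions from this example):

```python
import math

# Illustrative exponential trend: P(t) = beta0 * exp(beta1 * t)
# with assumed beta0 = $5.00 and beta1 = 8.0% per year
beta0, beta1 = 5.00, 0.08

def price(t):
    """Price at time t under a constant continuously compounded growth rate."""
    return beta0 * math.exp(beta1 * t)

p3 = price(3)
print(round(p3, 2))  # 6.36

# The period-over-period log return is constant at beta1 = 8.0%
log_return = math.log(price(3) / price(2))
print(round(log_return, 4))  # 0.08
```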

This is a log-linear trend (https://en.wikipedia.org/wiki/Log-linear_model) because its natural log is linear:
  • If P(t) = β(0)*exp[β(1)*TIME], then taking LN of each side:
  • LN[P(t)] = LN[β(0)] + β(1)*TIME, which is the log-linear model that is equivalent to GARP's displayed log-linear model:
  • LN[Y(t)] = δ(0) + δ(1)*t ... because LN[β(0)] and δ(0) are both constant intercepts in the model. Consequently, in GARP's 11.4 expression, the time series, at time zero, begins at exp[δ(0)], whereas ours (based on Diebold) begins at exp(LN[β(0)]) = β(0), such that β(0) is already the native intercept.
Your displayed 11.4 does contain an additional step, so that is really the difference: LN[Y(t)] = δ(0) + δ(1)*t + ε(t) is the log-linear model that is equivalent to LN[P(t)] = LN[β(0)] + β(1)*TIME; but this log-linear model can be generalized (extended) to a log-quadratic model by adding t^2 (or higher t^i) terms. So the δ(2)*t^2 term is the difference. I hope that's helpful,
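To see the equivalence numerically, here is a minimal sketch (my own illustration, not from GARP or Diebold) that fits LN[P(t)] = a + b*t to an exact exponential trend and confirms the fitted intercept is LN[β(0)] and the slope is β(1):

```python
import numpy as np

# Assumed illustrative parameters: beta0 = $5.00, beta1 = 8.0%
beta0, beta1 = 5.00, 0.08
t = np.arange(0, 10)
p = beta0 * np.exp(beta1 * t)       # exponential trend P(t)

# Linear fit of LN[P(t)] on t: slope b and intercept a
b, a = np.polyfit(t, np.log(p), 1)

print(round(a, 6))          # intercept a = ln(5.00), approx 1.609438
print(round(b, 6))          # slope b = beta1 = 0.08
print(round(np.exp(a), 6))  # exp(a) recovers the native intercept beta0 = 5.0
```

Whether the intercept is written as ln[β(0)] or relabeled as a single constant δ(0) makes no difference to the fit; exponentiating the fitted intercept returns the native starting price either way.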
 

dtammerz

Member
Subscriber
Thank you, this is helpful!
 