# P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

1. ### P1.T2.300. Probability functions (Miller)

Hello @fccodart I just wanted to make sure that you read through all of the comments in this forum thread (there are 5 pages of discussions in this thread) to see if your question was already answered. The first question that was posted asks about the antiderivative formulas, and David...
Replies: 80 · Views: 3,153
2. ### P1.T2.309. Probability Distributions I, Miller Chapter 4

@David Harper CFA FRM - thank you very much for such a detailed answer. Now that I understand the difference between event and outcome, or permutation vs. combination, allow me to supplement my question as follows: Is it even possible to do the question without building a binomial tree? I.e., on exam day, is there a way to think about this so that we can "quickly" understand that the 7/5...
Replies: 59 · Views: 1,529
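The binomial counting this thread discusses can be checked without drawing a tree. A minimal sketch (the values below are illustrative, not the question's own numbers, which are truncated in the snippet):

```python
from math import comb

# Binomial probability of exactly k successes in n trials:
# P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
n, k, p = 5, 2, 0.5
prob = comb(n, k) * p ** k * (1 - p) ** (n - k)
print(prob)  # 0.3125 (= 10/32)
```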
3. ### P1.T2.310. Probability Distributions II, Miller Chapter 4

Hi @verdi Your expression, var(x+y)=varX+varY+2covXY, is correct of course. But its general form, if we include constants (aka, weights) of 'a' and 'b' is given by var(aX + bY) = a^2*var(X) + b^2*var(Y) + 2*a*b*cov(X,Y); by general-special, I just mean that your expression is the "special case" where a = 1 and b = 1. In fact, this variance is a special case of the covariance and itself further...
Replies: 50 · Views: 1,288
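The general-form identity in this thread, var(aX + bY) = a^2*var(X) + b^2*var(Y) + 2*a*b*cov(X,Y), can be verified numerically. A quick sketch on a small illustrative sample (population moments, dividing by n):

```python
# Verifying var(aX + bY) = a^2*var(X) + b^2*var(Y) + 2*a*b*cov(X,Y)
xs = [1.0, 2.0, 4.0, 5.0]
ys = [2.0, 1.0, 5.0, 4.0]
a, b = 2.0, 3.0
n = len(xs)

def mean(vs):
    return sum(vs) / len(vs)

def pvar(vs):
    m = mean(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

cov = sum((x - mean(xs)) * (y - mean(ys)) for x, y in zip(xs, ys)) / n
lhs = pvar([a * x + b * y for x, y in zip(xs, ys)])
rhs = a ** 2 * pvar(xs) + b ** 2 * pvar(ys) + 2 * a * b * cov
print(abs(lhs - rhs) < 1e-9)  # True: the identity is exact
```

Setting a = b = 1 recovers the special case var(X + Y) = var(X) + var(Y) + 2*cov(X,Y) discussed in the thread.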
4. ### P1.T2.209 T-statistic and confidence interval (Stock & Watson)

ok, clear. Thanks.
Replies: 81 · Views: 1,225
5. ### P1.T2.202. Variance of sum of random variables (Stock & Watson)

Hi @Arseniy Semiletenko Good point! In truth, it's a weakness of my question: I wrote this question in 2012 (per the 2xx.x numbering) and, having improved my technique, I would not today write a question that has two valid answers to the self-contained question. It's not a "best practice." It's a corollary of a rule that I've employed in reviewing, and giving feedback on, GARP's own practice...
Replies: 61 · Views: 1,192
6. ### P1.T2.301. Miller's probability matrix

Hi @Vita_lee1017 It's true that question 302.2 refers to a continuous random variable, while 302.1 refers to a discrete random variable, but the expected values are essentially similar: For a discrete probability distribution, the expected value = x(1)*p(1) + x(2)*p(2) + .... x(n)*p(n) = summation of x_i*f(x_i) For a continuous probability distribution, the expected value is essentially...
Replies: 38 · Views: 1,148
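The discrete-versus-continuous expectation discussed in this thread can be sketched in a few lines; the distributions below are illustrative stand-ins for the question's own:

```python
# Discrete: E[X] = x(1)*p(1) + x(2)*p(2) + ... + x(n)*p(n)
xs = [1, 2, 3, 4]
ps = [0.1, 0.2, 0.3, 0.4]
ev_discrete = sum(x * p for x, p in zip(xs, ps))
print(round(ev_discrete, 6))  # 3.0

# Continuous: E[X] = integral of x*f(x) dx, approximated here by a
# midpoint Riemann sum for the uniform density f(x) = 1 on [0, 1]
n = 100_000
ev_continuous = sum(((i + 0.5) / n) * 1.0 for i in range(n)) / n
print(round(ev_continuous, 6))  # 0.5
```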
7. ### P1.T2.312. Mixture distributions (Miller)

Just to add a few more thoughts, the exam "could" ask you to use an obscure level of significance which would require you to retrieve a value from a z table. If this was the case, the exam would provide a snippet of the respective region of the z table. (I would add that this is a totally reasonable question in my mind). Also, memorizing the most common z's will help you but I don't think...
Replies: 43 · Views: 1,130
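The "most common z's" the thread recommends memorizing can be recovered from the standard normal inverse CDF, e.g. with Python's `statistics.NormalDist`:

```python
from statistics import NormalDist

# Common critical z values worth memorizing
z = NormalDist()
print(round(z.inv_cdf(0.90), 3))   # 1.282 (one-tailed 10%)
print(round(z.inv_cdf(0.95), 3))   # 1.645 (one-tailed 5%)
print(round(z.inv_cdf(0.975), 3))  # 1.96  (two-tailed 5%)
print(round(z.inv_cdf(0.99), 3))   # 2.326 (one-tailed 1%)
```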
8. ### P1.T2.303 Mean and variance of continuous probability density functions (pdf) (Miller)

Thanks David. I was struggling in the beginning, but after redoing it and trying to understand the steps, it became more logical.
Replies: 50 · Views: 1,127
9. ### P1.T2.504. Copulas (Hull)

Hello! The practice questions that David writes are focused on the learning objectives in the GARP curriculum, but many times his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
Replies: 25 · Views: 1,088
10. ### P1.T2.503. One-factor model (Hull)

@hellohi, This is how I have solved it:
- e1 = z1 = -0.88
- e2 = ρ*z1 + z2*sqrt(1 - ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1 - 0.70^2)] = -0.16609
- U = mean + SD*e1 = 5 + [3*(-0.88)] = 2.36
- V = mean + SD*e2 = 10 + [6*(-0.16609)] = 9.00346
Thanks, Rajiv
Replies: 20 · Views: 1,033
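The one-factor calculation in the snippet translates directly into code. A sketch of the Cholesky-style construction of a correlated pair of standard normals from independent draws z1 and z2, using the values quoted:

```python
import math

# Build a correlated standard normal pair from independent draws
z1, z2 = -0.88, 0.63
rho = 0.70

e1 = z1
e2 = rho * z1 + z2 * math.sqrt(1 - rho ** 2)

u = 5 + 3 * e1    # mean 5, SD 3
v = 10 + 6 * e2   # mean 10, SD 6
print(round(e2, 5), round(u, 2), round(v, 5))  # -0.16609 2.36 9.00346
```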
11. ### P1.T2.307. Skew and Kurtosis (Miller)

Hi @verdi Yes, nice catch of the typo (which I did miss). It should be either Σ [(xi - μ)^2 * pi] or 1/n*Σ (xi - μ)^2, as in Σ [(xi - μ)^2 * pi] = (1-3)^2*(1/3) + (2-3)^2*(1/3) +(6-3)^2*(1/3) = 4.67, but since each of the outcomes is equally likely this is the same as "un-distributing the 1/3" with 1/n*Σ (xi - μ)^2 = (1/3)* [(1-3)^2 + (2-3)^2 +(6-3)^2] = 4.67. Thank you for your attention to...
Replies: 32 · Views: 980
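The two equivalent variance computations in the snippet, Σ [(xi - μ)^2 * pi] and (1/n)*Σ (xi - μ)^2, can be checked on the same {1, 2, 6} data:

```python
# Both forms give the same answer when outcomes are equally likely
xs = [1, 2, 6]
mu = sum(xs) / len(xs)  # 3.0

var_weighted = sum((x - mu) ** 2 * (1 / 3) for x in xs)  # sum of (xi - mu)^2 * pi
var_equal = sum((x - mu) ** 2 for x in xs) / len(xs)     # (1/n) * sum of (xi - mu)^2
print(round(var_weighted, 2), round(var_equal, 2))  # 4.67 4.67
```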
12. ### L1.T2.111 Binomial & Poisson (Rachev)

@lavi5h, keep calm and focus on the topics you are good at. Exams highlight that you don't have to be an expert across all subjects in order to pass.
Replies: 48 · Views: 822
13. ### P1.T2.305. Minimum variance hedge (Miller)

It absolutely helps. Thank you!
Replies: 24 · Views: 800
14. ### P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models (Hull)

Hi @Xiconeto My question 502.3 is modeled after Hull's 11.6 (and his example in the reading). These are tedious, to be sure. GARCH(1,1) is updating both the correlation and each of the volatilities. To answer your question, yes use the omega directly in the GARCH formula. In the case of the covariance (correlation) update, where ω = 0.0000010, the updated correlation is given by ω +...
Replies: 25 · Views: 796
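The covariance update the snippet describes follows the standard GARCH(1,1) recursion. A sketch where ω = 0.0000010 matches the post but the other inputs are illustrative assumptions, not Hull's figures:

```python
# GARCH(1,1) covariance update: cov(n) = omega + alpha*x*y + beta*cov(n-1)
omega, alpha, beta = 0.0000010, 0.04, 0.94
x, y = 0.005, 0.008     # most recent daily returns of the two assets (assumed)
cov_prev = 0.000050     # yesterday's covariance estimate (assumed)
cov_new = omega + alpha * x * y + beta * cov_prev
print(round(cov_new, 9))  # 4.96e-05
```

The tedium the post mentions comes from running this same recursion three times: once for the covariance and once for each of the two variances, before dividing to get the updated correlation.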
15. ### P1.T2.304. Covariance (Miller)

Ohh I was having this same doubt... thanks.
Replies: 28 · Views: 787
16. ### P1.T2.306. Calculate the mean and variance of sums of variables. (Miller)

Hi @David Harper CFA FRM .. A special mention to question 306.2 for the way it manages to test how well you understand the covariance (and variance) properties.
Replies: 43 · Views: 738
17. ### P1.T2.206. Variance of sample average (Stock & Watson)

I am asking a kind of dumb question, but where is this formula in the Miller chapter? (Please give me the reference in David's PDF.)
Replies: 24 · Views: 678
18. ### P1.T2.212. Difference between two means (Stock & Watson)

That was a long message to type on a phone - got kind of tired towards the end!
Replies: 34 · Views: 655
19. ### Quiz-T2P1.T2.405. Distributions I

yes, I forgot to include the square term in my post; I had the 284. Ok, so we need to adjust the final number by n/(n-1). Good. Thank you!
Replies: 26 · Views: 575
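The n/(n-1) adjustment mentioned in the snippet converts a population variance into the unbiased sample variance. A minimal sketch on illustrative data (not the thread's own numbers):

```python
# Population variance divides by n; the unbiased sample variance
# divides by n-1, i.e. multiply the population figure by n/(n-1)
xs = [2, 4, 6, 8]
n = len(xs)
mu = sum(xs) / n
var_pop = sum((x - mu) ** 2 for x in xs) / n
var_sample = var_pop * n / (n - 1)
print(var_pop, round(var_sample, 4))  # 5.0 6.6667
```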
20. ### P1.T2.501. More Bayes Theorem (Miller)

Oh thanks. Got it. I need to remember that the conditional variable (red) in the numerator has to show up in the denominator: P(B|U) = P(BU) / P(U)
Replies: 24 · Views: 545
21. ### L1.T2.104 Exponentially weighted moving average (EWMA) (Hull)

@Deepak Chitnis and @David Harper CFA FRM CIPM thanks for your replies... I will make sure I keep a special eye out as to whether the question mentions simple vs. LN returns. If the question mentions neither, I think I shall plump for the LN option as that just feels more "right" to me. But hopefully it won't be too much of an issue.
Replies: 27 · Views: 536
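The EWMA recursion this thread covers is a one-liner once the return is in hand; whether u is a simple or log return is exactly the ambiguity discussed above. A sketch with the standard RiskMetrics λ = 0.94 and illustrative inputs:

```python
import math

# EWMA update: var(n) = lambda * var(n-1) + (1 - lambda) * u^2
lam = 0.94
sigma_prev = 0.01   # yesterday's daily volatility estimate (1%, assumed)
u = -0.005          # yesterday's return (assumed)
var_new = lam * sigma_prev ** 2 + (1 - lam) * u ** 2
sigma_new = math.sqrt(var_new)
print(round(sigma_new, 6))  # 0.009772
```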
22. ### P1.T2.311. Probability Distributions III, Miller

Hi @s3filin This is a typical Monte Carlo assumption: that certain risk factors are (at least a little bit) correlated. This would be used any time we want correlated normals in a Monte Carlo Simulation; it's almost not too much to say that independence (i.e., zero correlation) would be the unusual assumption. But it's super-super-easy to generate non-correlated normals, so the point is to...
Replies: 25 · Views: 526
23. ### P1.T2.314. Miller's one- and two-tailed hypotheses

Thank you @David Harper CFA FRM !
Replies: 26 · Views: 484
24. ### P1.T2.500. Bayes theorem (Miller)

@mansoor_memon I just don't understand your follow-up: you are solving for one joint probability. I tried to explain why that's not the full Bayes. 30% x 99% is (just) a single joint probability. Sorry, I need more to go on.
Replies: 28 · Views: 464
25. ### P1.T2.208. Sample mean estimators (Stock & Watson)

Hi David, I was just referring to the previous discussion to give better context for my question. Thanks a lot for your time and patience. Praveen
Replies: 33 · Views: 461
26. ### PQ-T2P1.T2.317. Continuous distributions (Topic review)

@Jaskarn The second distribution has a mean of 4 as given by N(µ, σ^2) = N(4, 1^2). See the dashed blue distribution. We are looking for the probability that the normal random variable X will be less than zero. If we used Z = (4-0)/sqrt(1), you'll notice we would be looking for Prob[Z < 4] which rounds to 100.0%. As the actual distribution is effectively, entirely to the right of zero, we...
Replies: 12 · Views: 453
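The tail probability discussed in the snippet is easy to confirm with Python's standard library:

```python
from statistics import NormalDist

# P(X < 0) for X ~ N(4, 1^2): z = (0 - 4)/1 = -4, so the left tail is
# effectively zero, which mirrors Prob[Z < 4] rounding to 100% on the other side
p = NormalDist(mu=4, sigma=1).cdf(0)
print(p < 0.0001)  # True: roughly 0.003%
```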
27. ### Quiz-T2P1.T2.403. Probabilities

Hi @lRRAngle Excellent! It's not a coincidence. You solved directly for the conditional Prob(speculative | default) = Joint Prob(S, D) / Unconditional Prob(D) = 18/20 = 90%. So you basically inferred the probability matrix directly (see below, assumptions given in yellow). It's totally consistent with the elongated Bayes. Thanks,
Replies: 18 · Views: 451
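The direct route in the snippet is just the definition of conditional probability, using the 18/20 counts quoted:

```python
# P(speculative | default) = Joint Prob(S, D) / Unconditional Prob(D)
joint_spec_and_default = 18   # speculative bonds that defaulted (per 100 bonds)
defaults = 20                 # all defaults
p_spec_given_default = joint_spec_and_default / defaults
print(p_spec_given_default)  # 0.9
```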
28. ### L1.T2.103 Weighting schemes to estimate volatility (Hull)

Hi @s3filin Great question and, yes, I am indeed saying that "Beta [in GARCH] is a decay factor and is analogous to lambda in EWMA." Hull actually shows this specifically in Chapter 23.4; I copied it below. In this way, GARCH β is analogous to EWMA λ; and GARCH α is analogous to EWMA's (1-λ) so I would not say--and hopefully did not anywhere say something like "what's lambda for EWMA is...
Replies: 11 · Views: 449
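The analogy in the snippet (GARCH β ↔ EWMA λ, GARCH α ↔ EWMA's 1-λ) amounts to saying EWMA is the special case of GARCH(1,1) with ω = 0. A sketch with illustrative inputs:

```python
# EWMA as GARCH(1,1) with omega = 0, alpha = 1 - lambda, beta = lambda
lam = 0.94
var_prev, u2 = 0.0001, 0.000025   # prior variance and squared return (assumed)

ewma = lam * var_prev + (1 - lam) * u2
garch = 0.0 + (1 - lam) * u2 + lam * var_prev   # omega + alpha*u^2 + beta*var
print(abs(ewma - garch) < 1e-18)  # True
```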
29. ### PQ-T2P1.T2.321. Univariate linear regression (topic review)

Many thanks, David!!
Replies: 34 · Views: 439
30. ### P1.T2.204. Joint, marginal, and conditional probability functions (Stock & Watson)

Hi Melody (@superpocoyo ) Here is the spreadsheet @ Please note that, in my response to mastvikas above, I had a typo which I've now corrected. It should read: (10 - 29.38)^2*(0.05/.32) = 58.65 105.859 is the conditional variance which determines the answer of 10.3 (the conditional standard deviation). I think the key here is to realize that, after we grok the conditionality, we are...
Replies: 10 · Views: 402