# P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

1. ### P1.T2.305. Minimum variance hedge (Miller)

Hi @sandra1122 Question 305.1 is looking for the optimal (i.e., minimum variance) mix between (A) and (B) in a portfolio that has a total weight of 100.0% because it is based on w(a) + w(b) = 100%. So it's like assuming you have $100.0 to allocate between the assets but you must allocate all$100.0 to some combination. That's what i meant by constraint. Question 305.2 instead starts with the...
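A minimal numeric sketch of the fully-invested (w(a) + w(b) = 100%) minimum-variance mix described above; the volatilities and correlation below are hypothetical, not the inputs of question 305.1:

```python
import math

def min_variance_weight(sigma_a, sigma_b, rho):
    """Weight on asset A that minimizes portfolio variance
    subject to the constraint w(a) + w(b) = 100%."""
    cov_ab = rho * sigma_a * sigma_b
    return (sigma_b**2 - cov_ab) / (sigma_a**2 + sigma_b**2 - 2 * cov_ab)

# Hypothetical inputs: sigma(A) = 10%, sigma(B) = 20%, rho = 0.30
w_a = min_variance_weight(0.10, 0.20, 0.30)
w_b = 1.0 - w_a  # the 100% constraint
var_p = (w_a**2 * 0.10**2 + w_b**2 * 0.20**2
         + 2 * w_a * w_b * 0.30 * 0.10 * 0.20)
print(round(w_a, 4), round(math.sqrt(var_p), 4))  # w(a) ≈ 0.8947, σ(p) ≈ 0.0979
```

Note that the resulting portfolio volatility is below the volatility of either asset alone, as the minimum-variance solution requires.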
Replies: 15
Views: 529

2. ### P1.T2.209 T-statistic and confidence interval

Hi @sandra1122 In a word, yes. The Bernoulli is a highly likely exam candidate because it characterizes default (i.e., either survive or default) and also VaR exceedance (on a given day, the VaR is either exceeded or not). Importantly, a series of i.i.d. Bernoulli variables (succeed/fail) characterizes the binomial distribution. The exam will also expect you to know the variance of a Bernoulli...

Replies: 44
Views: 896

3. ### P1.T2.214. Regression lines (Stock & Watson)

Hi Ben (@ohmanb) Yes, it's foundational!
You really just need these:

cov(x,y) = standard_deviation(x)*standard_deviation(y)*correlation(x,y) = σ(x)*σ(y)*ρ(x,y)

covariance(x,x) = variance(x); which you can see from above because ρ(x,x) = 1.0, so that cov(x,x) = σ(x)*σ(x)*1.0 = σ(x)^2

Therefore, β(i,M) = cov(i,M)/σ(M)^2 = [σ(i)*σ(M)*ρ(i,M)]/σ(M)^2, and we can cancel one StdDev such that =...

Replies: 13
Views: 238

4. ### R16.P1.T2. Hull - expected value of u(n+t-1)^2

You need the assumption that the drift term of u can be neglected. If u(t) is a random variable, then its variance is defined as σ(t)^2 = E( u(t)^2 ) - E( u(t) )^2. If you now assume that the E( u(t) )^2 term can be neglected relative to the E( u(t)^2 ) term, then you get your result. u(t) models the return for a time period dt: u(t) = ( s(t) - s(t - dt) ) / s(t - dt), which means that E( u(t)...
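Both of the numeric claims above (the cov/β identities, and the drift-neglect approximation for E(u^2)) can be sketched in a few lines; all inputs below are hypothetical:

```python
# (1) Covariance and beta identities, with hypothetical vols and correlation:
sigma_i, sigma_m, rho = 0.20, 0.15, 0.50
cov_im = rho * sigma_i * sigma_m       # cov(i,M) = σ(i)*σ(M)*ρ(i,M)
beta_full = cov_im / sigma_m**2        # β(i,M) = cov(i,M)/σ(M)^2
beta_short = rho * sigma_i / sigma_m   # after cancelling one σ(M)

# (2) Drift-neglect approximation: E(u^2) ≈ σ^2 when the drift is tiny.
mu, sigma = 0.0005, 0.01               # hypothetical daily drift and volatility
e_u2 = sigma**2 + mu**2                # E(u^2) = Var(u) + E(u)^2
drift_share = mu**2 / e_u2             # the (tiny) share contributed by drift
print(round(beta_full, 6), round(beta_short, 6), round(drift_share, 4))
```

The two beta expressions agree, and for realistic daily parameters the drift contributes well under 1% of E(u^2), which is why it can be neglected.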
Replies: 1
Views: 17

5. ### P1.T2.222. Homoskedasticity-only F-statistic

@uness_o7 brilliant, I just did not see it. Thank you!

Replies: 14
Views: 305

6. ### L1.T2.93 Jarque-Bera

I would like to dig a bit deeper into the theory of the Jarque-Bera (JB) test because it is a very useful test and, what is more, it is very easy to implement (without using econometric software) and to understand (not only for testing regression residuals, but also for simple stock returns). Much to my surprise, the JB test is not a mandatory reading of the FRM (apparently it has been removed...
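As the post says, the JB statistic is easy to implement without econometric software; a minimal sketch on a small hypothetical sample (the data below are illustrative only):

```python
def jarque_bera(xs):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4), using biased moment estimators."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean)**2 for x in xs) / n
    m3 = sum((x - mean)**3 for x in xs) / n
    m4 = sum((x - mean)**4 for x in xs) / n
    skew = m3 / m2**1.5          # standardized third moment
    kurt = m4 / m2**2            # standardized fourth moment (normal = 3)
    return n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)

sample = [-3, -2, -1, 0, 0, 1, 2, 3]  # hypothetical, symmetric (skew = 0)
print(round(jarque_bera(sample), 6))  # → 0.333333
```

Under the null of normality, JB is asymptotically chi-squared with 2 degrees of freedom, so a value this small would not reject normality.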
Replies: 14
Views: 133

7. ### P1.T2.221. Joint null hypothesis in multiple OLS regression

It went out of my head at that very moment, thanks.

Replies: 14
Views: 311

8. ### P1.T2.504. Copulas (Hull)

Hello, The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
Replies: 25
Views: 778

9. ### P1.T2.508. Wold's theorem

[USER=42750]@ Okay great. No worries, honestly I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...

Replies: 4
Views: 182

10. ### L1.T2.94 Forecasting (prediction) error

Hi @FRM The predictor variance (aka, forecasting or prediction error) is from the previously assigned Gujarati, but is no longer assigned in P1.T2. Regressions; it's a bit too difficult. Sorry. Thank you!

Replies: 2
Views: 80

11. ### P1.T2.216. Regression sums of squares: ESS, SSR, and TSS

Hi [USER=42750]@ Maybe my notation isn't typical here, come to think of it, but ESS, TSS and RSS are all units squared. They are very much like variances.
So in 216, for example, as the observational units are dollars, the regression squared sums (i.e., TSS and RSS) are units-squared, so dollars^2 still looks okay to me. The SER, on the other hand, is back to dollars. To tell you the truth, the...

Replies: 13
Views: 246

12. ### PQ-T2P1.T2.319. Probabilities (Topic Review)

Hi @Angelinelyt Under annual compounding, the price of this 12-year zero-coupon bond is given by P = 100/(1+y)^12. We want the yield that would imply the lower price, such that $60.00 = 100/(1+y)^12, so (1+y)^12 = 100/60 and y = (100/60)^(1/12) - 1. This sets up the yield shock required for the bond price to drop: From current $62.46 = $100/(1+4.000%)^12, Down to: $60.00 =...
Replies: 11
Views: 298
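The yield solved for above can be checked in a few lines (figures per the thread: current price $62.46 at 4.000%, target price $60.00):

```python
# Solve $60.00 = 100/(1+y)^12 for the implied yield y
y_new = (100.0 / 60.0) ** (1.0 / 12.0) - 1.0

price_current = 100.0 / (1.0 + 0.04) ** 12   # ≈ $62.46 at the current 4.000% yield
price_check = 100.0 / (1.0 + y_new) ** 12    # recovers the $60.00 target price

print(round(y_new, 6), round(price_current, 2), round(price_check, 2))
```

So the required yield shock is from 4.000% up to roughly 4.349%.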
13. ### P1.T2.503. One-factor model (Hull)

@hellohi, This is how I have solved it: e1 = z1 = -0.88; e2 = ρ*z1 + z2*sqrt(1-ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1-(0.70)^2)] = -0.16609. U = mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36. V = mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346. Thanks, Rajiv
Replies: 20
Views: 731
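Rajiv's computation above can be reproduced directly (values per the post):

```python
import math

z1, z2, rho = -0.88, 0.63, 0.70   # independent standard normal draws and correlation

e1 = z1
e2 = rho * z1 + z2 * math.sqrt(1.0 - rho**2)   # correlated epsilon for the 2nd variable

U = 5.0 + 3.0 * e1    # mean 5, SD 3
V = 10.0 + 6.0 * e2   # mean 10, SD 6
print(round(e2, 5), round(U, 2), round(V, 5))  # -0.16609, 2.36, 9.00346
```

The sqrt(1 - ρ^2) term is what guarantees that e2 is standard normal while having correlation ρ with e1.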
14. ### P1.T2.301. Miller's probability matrix

To work out the mean of f(x), we integrate x*f(x) instead of just integrating f(x) as in the green statement above. Integrating x*f(x) is not tricky: you just have an extra factor of x, and you already know how to solve the green statement above. After integrating x*f(x), you can evaluate it by putting x = 6.
Replies: 23
Views: 781
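As a sketch of the "integrate x*f(x) for the mean" point: the density below is hypothetical (f(x) = x/18 on [0, 6], not Miller's actual density), integrated numerically:

```python
def integrate(g, a, b, n=100_000):
    """Simple trapezoidal rule over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return total * h

f = lambda x: x / 18.0  # hypothetical pdf on [0, 6]; integrates to 1

total_prob = integrate(f, 0, 6)                   # ≈ 1.0 (valid pdf)
mean = integrate(lambda x: x * f(x), 0, 6)        # E[X] = ∫ x*f(x) dx ≈ 4.0
print(round(total_prob, 6), round(mean, 6))
```

For this particular density the mean is exactly ∫ x^2/18 dx from 0 to 6 = 216/54 = 4, so the numeric result can be checked by hand.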
15. ### PQ-T2P1.T2.322. Multivariate linear regression (topic review)

Hi @Aradhikka My apologies: the displayed values are rounded. The question is entirely realistic (based on actual dataset) such that the MEAL_PCT coefficient = -0.545566 and its standard error = 0.021518 which gives t ratio of 25.35. Yours looks approximately correct for the displayed values (which is all you have of course). So it's just rounding. I have tagged it for non-urgent revision....
Replies: 6
Views: 145
16. ### L1.T1.92 Coefficients of determination and correlation

@Angelinelyt These regression questions were written based on a previous author (Gujarati, who preceded Stock and Watson) in quantitative methods. He referred to univariate regressions as two-variable regressions because in the univariate regression, y(i) = a(0) + β(1)*X(1), there is an independent plus a dependent variable (i.e., two variables including the dependent). In retrospect, this is...
Replies: 9
Views: 105
17. ### L1.T2.124 Exponential versus Poisson

Yes, thank you @AGM777 for the correction to my mistake (note: thread post mistake only, no change to source Q&A)
Replies: 14
Views: 201
18. ### L1.T2.85 Sample regression function (SRF)

Thanks David.
Replies: 7
Views: 72
19. ### P1.T2.202. Variance of sum of random variables

David, please ignore me. I figured it out - the beta is between G and the portfolio and not G with S so I worked out part of the covariance but not the full covariance. I now understand the formula you have used. Sorry for the trouble.
Replies: 53
Views: 1,024
20. ### PQ-T2P1.T2.321. Univariate linear regression (topic review)

Really got it now. Thanks very much
Replies: 15
Views: 241
21. ### PQ-T2P1.T2.318. Distributional moments (Topic review)

Hi @RobKing Right, about kurtosis, there is much previous discussion in this forum (going back years). Based on the math (i.e., kurtosis is a standardized fourth moment), personally I do not view kurtosis as any function of the peak; I view kurtosis as a measure of "tail heaviness" (my favorite expression). I don't even like "fat tails": I prefer "heavy tails" or "light tails" because they...
Replies: 8
Views: 168
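One way to see kurtosis as tail heaviness rather than peakedness: two small hypothetical samples of equal size, where the one with the extreme tails has the larger standardized fourth moment:

```python
def kurtosis(xs):
    """Standardized fourth moment (normal distribution = 3)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean)**2 for x in xs) / n
    m4 = sum((x - mean)**4 for x in xs) / n
    return m4 / m2**2

light_tails = [-3, -2, -1, 0, 0, 1, 2, 3]  # hypothetical, evenly spread
heavy_tails = [-8, -1, 0, 0, 0, 0, 1, 8]   # hypothetical, same size, extreme tails
print(kurtosis(light_tails), kurtosis(heavy_tails))  # 2.0 vs ≈ 3.88
```

The fourth power makes the two extreme observations dominate the statistic, which is exactly the "heavy tails" reading described above.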
22. ### L1.T2.89 OLS standard errors

Hi @kik92 It's a fair question. Although the FRM exam has yet (to my knowledge) to explicate the implicit assumption of homoscedasticity (i.e., the typical regression question simply assumes it), new questions probably should attach a clarification such as "Assuming a classical linear regression model (CLRM)" or, less cheeky, "Assuming homoskedastic errors per the Gauss-Markov Theorem ..." In...
Replies: 11
Views: 184
23. ### PQ-T2P1.T2.324. Estimating volatility (Topic Review)

Hi @Srilakshmi Yes, you are exactly correct. In question 324.1, GARCH persistence = α + β = 0.06 + 0.82 = 0.880. And this has (had) a source, and it is occasionally used this way. For example: "The persistence of a garch model has to do with how fast large volatilities decay after a shock. For the garch(1,1) model the key statistic is the sum of the two main parameters (alpha1 and beta1,...
Replies: 7
Views: 233
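The persistence figure, and one common way it is read (how fast a shock decays), in a couple of lines; the half-life computation is illustrative, not part of question 324.1:

```python
import math

alpha, beta = 0.06, 0.82      # GARCH(1,1) parameters per question 324.1
persistence = alpha + beta    # 0.880

# After a shock, its weight in the variance forecast decays as persistence^n,
# so the number of days for the shock's weight to halve is:
half_life = math.log(0.5) / math.log(persistence)
print(round(persistence, 3), round(half_life, 2))  # 0.88 and ≈ 5.42 days
```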
24. ### Question 77: P value

Hi @SAhmed Apologies that even I can't find the link; this is an old question. It's looking for the F test of equality of variances (based on previously assigned Gujarati). So per F ratio = variance(larger)/variance(smaller), here the F ratio = 0.12^2/0.10^2 = 1.44, and the p-value (in Excel, but it can be achieved via lookup) is given by F.DIST.RT(1.44, 29 df, 29 df) = 0.165836; i.e., the area...
Replies: 3
Views: 26
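The F.DIST.RT value can be reproduced without Excel. This is a sketch using the identity P(F > f) = I_x(d2/2, d1/2) with x = d2/(d2 + d1*f), where I is the regularized incomplete beta function, evaluated here by plain Simpson's rule rather than a library call:

```python
def f_right_tail(f_stat, d1, d2, n=20_000):
    """Right-tail p-value of the F(d1, d2) distribution:
    P(F > f) = I_x(d2/2, d1/2), x = d2/(d2 + d1*f)."""
    a, b = d2 / 2.0, d1 / 2.0
    x = d2 / (d2 + d1 * f_stat)
    g = lambda t: t**(a - 1.0) * (1.0 - t)**(b - 1.0)  # beta integrand

    def simpson(lo, hi):
        h = (hi - lo) / n
        s = g(lo) + g(hi)
        s += 4.0 * sum(g(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        s += 2.0 * sum(g(lo + 2 * i * h) for i in range(1, n // 2))
        return s * h / 3.0

    return simpson(0.0, x) / simpson(0.0, 1.0)  # incomplete / complete beta

f_ratio = 0.12**2 / 0.10**2          # = 1.44, larger variance over smaller
p = f_right_tail(f_ratio, 29, 29)
print(round(f_ratio, 2), round(p, 4))  # 1.44 and ≈ 0.1658, matching F.DIST.RT
```

In practice one would call a statistics library's F survival function; the point of the hand-rolled version is only that the Excel number is an ordinary tail area.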
25. ### L1.T2.68 Normal distribution

This is just something that you need to remember - some call it the 68-95-99 rule! Not sure what more can be said - approximately 68% of the distribution lies between -1 standard deviation and +1 standard deviation from the mean... similarly with the other figures.
Replies: 2
Views: 68
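The 68-95-99 figures follow directly from the standard normal CDF, which needs only the error function:

```python
from math import erf, sqrt

def within(k):
    """P(|Z| <= k) for standard normal Z, via the error function."""
    return erf(k / sqrt(2.0))

for k in (1, 2, 3):
    print(k, round(within(k), 4))  # ≈ 0.6827, 0.9545, 0.9973
```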
26. ### P1.T2.312. Mixture distributions

Just to add a few more thoughts, the exam "could" ask you to use an obscure level of significance which would require you to retrieve a value from a z table. If this was the case, the exam would provide a snippet of the respective region of the z table. (I would add that this is a totally reasonable question in my mind). Also, memorizing the most common z's will help you but I don't think...
Replies: 43
Views: 900
27. ### Miller, Chapter 2 video: Probabilities

Amazing response @David Harper CFA FRM . Thanks so much.
Replies: 4
Views: 26
28. ### P1.T2.314. Miller's one- and two-tailed hypotheses

Hi @hellohi It's called linear interpolation, and hopefully my picture below will help. Your table only gives us values at 20% and 15%, but we want the value associated with 16.36%. Visually, we want the (unseen) value in the yellow cell, which is (so to speak) directly below the 16.36%. This: (16.36% - 20.00%)/(15.00% - 20.00%) = 0.728 gives us the fraction of green to blue...
Replies: 18
Views: 327
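The interpolation step can be written as a tiny helper. The fraction matches the thread; the table values (10.0 and 20.0) in the usage line are hypothetical stand-ins for the actual table entries:

```python
def lerp(x, x0, x1, y0, y1):
    """Linear interpolation: value at x on the line through (x0, y0) and (x1, y1)."""
    frac = (x - x0) / (x1 - x0)
    return y0 + frac * (y1 - y0)

# Fraction of the distance from 20% toward 15%, per the thread:
frac = (0.1636 - 0.20) / (0.15 - 0.20)
print(round(frac, 3))  # 0.728

# Hypothetical table values at 20% and 15%:
value = lerp(0.1636, 0.20, 0.15, 10.0, 20.0)
print(round(value, 2))  # 17.28
```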
29. ### P1.T2.602. Bootstrapping (Brooks)

A GARCH process is covered in the readings... Simulations are used to produce samples from distributions that are not parametric or not in "closed form"; or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes or flood levels or survival in space...
Replies: 4
Views: 103
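Since a GARCH process is in the readings, here is a minimal simulation sketch in the spirit of the post; the parameters (omega, alpha, beta) are hypothetical, chosen only so that persistence alpha + beta < 1:

```python
import random

def simulate_garch(omega, alpha, beta, n, seed=42):
    """Simulate n returns from a toy GARCH(1,1) process.
    A sketch with hypothetical parameters, not calibrated to any market."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the long-run variance
    returns = []
    for _ in range(n):
        u = rng.gauss(0.0, 1.0) * var**0.5        # return draw given today's variance
        returns.append(u)
        var = omega + alpha * u**2 + beta * var   # GARCH(1,1) variance update
    return returns

rets = simulate_garch(omega=0.00002, alpha=0.06, beta=0.82, n=500)
print(len(rets))
```

Seeding the generator makes the run reproducible, which matters when a simulated sample is meant to be examined or bootstrapped afterwards.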