P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. David Harper CFA FRM

    L1.T2.94 Forecasting (prediction) error

    Hi @FRM The predictor variance (aka, forecasting or prediction error) is from the previously assigned Gujarati, but is no longer assigned in P1.T2. Regressions; it's a bit too difficult. Sorry. Thank you!
    Replies:
    2
    Views:
    79
  2. Suzanne Evans

    P1.T2.216. Regression sums of squares: ESS, SSR, and TSS

    Hi @ Maybe my notation isn't typical here, come to think of it, but ESS, TSS and RSS are all in squared units. They are very much like variances. So in 216, for example, as the observational units are dollars, the regression sums of squares (i.e., TSS and RSS) are units-squared, so dollars^2 still looks okay to me. The SER, on the other hand, is back to dollars. To tell you the truth, the...
    Replies:
    13
    Views:
    236
  3. Nicole Seaman

    PQ-T2 P1.T2.319. Probabilities (Topic Review)

    Hi @Angelinelyt Under annual compounding, the price for this 12-year zero-coupon bond is given by P = 100/(1+y)^12. We want the yield that would imply the lower price, such that $60.00 = 100/(1+y)^12, so (1+y)^12 = 100/60 and y = (100/60)^(1/12) - 1. This sets up the yield shock required for the bond price to drop: From current $62.46 = $100/(1+4.000%)^12, Down to: $60.00 =...
    Replies:
    11
    Views:
    297
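As a quick sanity check on the bond arithmetic quoted in the thread above, here is a minimal Python sketch of the same calculation under annual compounding; the function names are mine, not from the thread.

```python
# Yield implied by a target price for an n-year zero-coupon bond
# (annual compounding), per the worked example quoted above.

def zero_price(face, y, n):
    """Price of an n-year zero-coupon bond at annual yield y: face/(1+y)^n."""
    return face / (1 + y) ** n

def implied_yield(face, price, n):
    """Solve price = face/(1+y)^n for y, i.e. y = (face/price)^(1/n) - 1."""
    return (face / price) ** (1 / n) - 1

y0 = implied_yield(100, 62.46, 12)  # current yield, approx 4.00%
y1 = implied_yield(100, 60.00, 12)  # yield consistent with a $60.00 price
shock = y1 - y0                     # upward yield shock required for the drop
```

The second line reproduces the thread's y = (100/60)^(1/12) - 1.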
  4. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved it: e1 = z1 = -0.88; e2 = ρ*z1 + z2*sqrt(1-ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1-(0.7)^2)] = -0.16609; U = Mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36; V = Mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346. Thanks, Rajiv
    Replies:
    20
    Views:
    723
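Rajiv's construction above is the standard two-variable Cholesky-style trick: build a correlated normal from two independent draws. A minimal sketch in Python (function name is mine):

```python
import math

# Given independent standard normal draws z1 and z2, construct
# e2 = rho*z1 + z2*sqrt(1 - rho^2), which has correlation rho with e1 = z1;
# then rescale each sample by its mean and standard deviation.

def correlate(z1, z2, rho):
    return rho * z1 + z2 * math.sqrt(1 - rho ** 2)

z1, z2, rho = -0.88, 0.63, 0.70
e1 = z1
e2 = correlate(z1, z2, rho)  # approx -0.16609
U = 5 + 3 * e1               # 2.36
V = 10 + 6 * e2              # approx 9.00346
```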
  5. Fran

    P1.T2.301. Miller's probability matrix

    For working out the mean of f(x), we integrate x*f(x) instead of just integrating f(x) as in the green statement above. Integrating x*f(x) is not tricky: you simply have another factor of x, and you already know how to solve the green statement above. After integrating x*f(x), you can solve it by putting x = 6.
    Replies:
    23
    Views:
    750
  6. Nicole Seaman

    PQ-T2 P1.T2.322. Multivariate linear regression (topic review)

    Hi @Aradhikka My apologies: the displayed values are rounded. The question is entirely realistic (based on an actual dataset) such that the MEAL_PCT coefficient = -0.545566 and its standard error = 0.021518, which gives a t-ratio of 25.35 in absolute value. Yours looks approximately correct for the displayed values (which is all you have, of course). So it's just rounding. I have tagged it for non-urgent revision....
    Replies:
    6
    Views:
    144
  7. David Harper CFA FRM

    L1.T1.92 Coefficients of determination and correlation

    @Angelinelyt These regression questions were written based on a previous author (Gujarati, who preceded Stock and Watson) in quantitative methods. He referred to univariate regressions as two-variable regressions because in the univariate regression, y(i) = a(0) + β(1)*X(1), there is an independent plus a dependent variable (i.e., two variables including the dependent). In retrospect, this is...
    Replies:
    9
    Views:
    104
  8. David Harper CFA FRM

    L1.T2.124 Exponential versus Poisson

    Yes, thank you @AGM777 for the correction to my mistake (note: thread post mistake only, no change to source Q&A)
    Replies:
    14
    Views:
    199
  9. David Harper CFA FRM

    L1.T2.85 Sample regression function (SRF)

    Thanks David.
    Replies:
    7
    Views:
    71
  10. David Harper CFA FRM

    P1.T2.202. Variance of sum of random variables

    David, please ignore me. I figured it out - the beta is between G and the portfolio and not G with S so I worked out part of the covariance but not the full covariance. I now understand the formula you have used. Sorry for the trouble.
    Replies:
    53
    Views:
    1,016
  11. Nicole Seaman

    PQ-T2 P1.T2.321. Univariate linear regression (topic review)

    Really got it now. Thanks very much :)
    Replies:
    15
    Views:
    234
  12. Nicole Seaman

    PQ-T2 P1.T2.318. Distributional moments (Topic review)

    Hi @RobKing Right, about kurtosis, there is much previous discussion in this forum (going years back). Based on the math (i.e., kurtosis is a standardized fourth moment), I personally do not view kurtosis as any function of the peak; I view kurtosis as a measure of "tail heaviness" (my favorite expression). I don't even like "fat tails"; I prefer "heavy tails" or "light tails" because they...
    Replies:
    8
    Views:
    167
  13. David Harper CFA FRM

    L1.T2.89 OLS standard errors

    Hi @kik92 It's a fair question. Although the FRM exam has yet (to my knowledge) to explicate the implicit assumption of homoscedasticity (i.e., the typical regression question simply assumes it), new questions probably should attach a clarification such as "Assuming a classical linear regression model (CLRM)" or, less cheeky, "Assuming homoskedastic errors per the Gauss-Markov Theorem ..." In...
    Replies:
    11
    Views:
    181
  14. Nicole Seaman

    PQ-T2 P1.T2.324. Estimating volatility (Topic Review)

    Hi @Srilakshmi Yes, you are exactly correct. In question 324.1, GARCH persistence = α + β = 0.06 + 0.82 = 0.880. This has (had) a source, and it is occasionally used this way. For example: "The persistence of a garch model has to do with how fast large volatilities decay after a shock. For the garch(1,1) model the key statistic is the sum of the two main parameters (alpha1 and beta1,...
    Replies:
    7
    Views:
    231
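The persistence arithmetic from question 324.1 above is a one-liner; this sketch (variable names mine) also shows the complementary weight, 1 - α - β, which governs how quickly the variance forecast mean-reverts to its long-run level:

```python
# GARCH(1,1) persistence per question 324.1: persistence = alpha + beta.
# The remaining weight, 1 - alpha - beta, is placed on the long-run variance
# and determines the speed of mean reversion after a volatility shock.

alpha, beta = 0.06, 0.82
persistence = alpha + beta               # 0.880
mean_reversion_weight = 1 - persistence  # 0.120 weight on long-run variance
```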
  15. Suzanne Evans

    Question 77: P value

    Hi @SAhmed Apologies that even I can't find the link; this is an old question. It's looking for the F test of equality of variances (based on the previously assigned Gujarati). So per F ratio = variance(larger)/variance(smaller), here the F ratio = 0.12^2/0.10^2 = 1.44, and the p-value (in Excel, but it can be retrieved via table lookup) is given by F.DIST.RT(1.44, 29 df, 29 df) = 0.165836; i.e., the area...
    Replies:
    3
    Views:
    26
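The Excel calculation quoted above can be reproduced in Python; here SciPy's F-distribution survival function plays the role of Excel's F.DIST.RT (this is a sketch of the same right-tail area, not the original thread's spreadsheet):

```python
from scipy.stats import f

# F test of equality of variances: F = larger variance / smaller variance;
# the p-value is the right-tail area of the F(29, 29) distribution,
# i.e. the analogue of Excel's F.DIST.RT(1.44, 29, 29).

f_ratio = 0.12 ** 2 / 0.10 ** 2   # 1.44
p_value = f.sf(f_ratio, 29, 29)   # approx 0.1658
```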
  16. David Harper CFA FRM

    L1.T2.68 Normal distribution

    This is just something that you need to remember - some call it the 68-95-99.7 rule! Not sure what more can be said - approximately 68% of the distribution lies between -1 standard deviation and +1 standard deviation from the mean... similarly with the other figures.
    Replies:
    2
    Views:
    68
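For the curious, the 68-95-99.7 figures mentioned above aren't arbitrary: they fall out of the standard normal CDF, since P(|Z| <= k) = erf(k/sqrt(2)). A stdlib-only sketch (function name mine):

```python
import math

# Probability that a standard normal draw lands within k standard
# deviations of the mean: P(|Z| <= k) = erf(k / sqrt(2)).

def within_k_sd(k):
    return math.erf(k / math.sqrt(2))

# within_k_sd(1) ~ 0.6827, within_k_sd(2) ~ 0.9545, within_k_sd(3) ~ 0.9973
```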
  17. Suzanne Evans

    P1.T2.217. Regression coefficients (Stock & Watson)

    Hi @luccacerf Yes, exactly. You are correct that the critical t-value at two-tailed 95% confidence with 2 d.f. is 4.30 per (just using Excel) T.INV(97.5%, 2) = 4.30, but we'd have 48 - 2 = 46 df, such that T.INV(97.5%, 46) = 2.013 or T.INV.2T(5%, 46) = 2.013 is the two-tailed 95.0% critical value. And for n > 30, we can safely approximate this with the standard normal's analogous 1.96...
    Replies:
    11
    Views:
    195
  18. Nicole Seaman

    P1.T2.312. Mixture distributions

    Just to add a few more thoughts, the exam "could" ask you to use an obscure level of significance which would require you to retrieve a value from a z table. If this was the case, the exam would provide a snippet of the respective region of the z table. (I would add that this is a totally reasonable question in my mind). Also, memorizing the most common z's will help you but I don't think...
    Replies:
    43
    Views:
    862
  19. PortoMarco79

    Miller, Chapter 2 video: Probabilities

    Amazing response @David Harper CFA FRM . Thanks so much.
    Replies:
    4
    Views:
    26
  20. Nicole Seaman

    P1.T2.314. Miller's one- and two-tailed hypotheses

    Hi @hellohi It's called linear interpolation, and hopefully my picture below will help. Your table only gives us values at 20% and 15%, but we want the value associated with 16.36%. Visually, we want the (unseen) value in the yellow cell, which is (so to speak) directly below the 16.36%. This: (16.36% - 20.00%)/(15.00% - 20.00%) = 0.728 gives us the fraction of green to blue...
    Replies:
    18
    Views:
    318
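The interpolation step described above generalizes to any pair of table points; a minimal sketch (function name mine):

```python
# Linear interpolation as described above: the table gives values at
# x0 = 20% and x1 = 15%, and we want the value at x = 16.36%.

def lerp(x, x0, x1, y0, y1):
    """Linearly interpolate y at x between known points (x0, y0) and (x1, y1)."""
    w = (x - x0) / (x1 - x0)  # fraction of the distance from x0 to x1
    return y0 + w * (y1 - y0)

# The weight itself, per the thread's arithmetic:
w = (0.1636 - 0.20) / (0.15 - 0.20)  # 0.728
```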
  21. Nicole Seaman

    P1.T2.602. Bootstrapping (Brooks)

    A GARCH process is covered in the readings.... Simulations are used to produce samples from distributions that are not parametric or not in "closed form"; or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes or flood levels or survival in space.....
    Replies:
    4
    Views:
    99
  22. Nicole Seaman

    P1.T2.315. Miller's hypothesis tests, continued

    Thanks dear @David Harper CFA FRM for your helpful answer
    Replies:
    10
    Views:
    159
  23. Suzanne Evans

    P1.T2.311. Probability Distributions III, Miller

    Hi @FRMeugene it may not have a direct reference (to be honest, I don't really have time to look for the cross-referenced source in every question: many of our questions are more detailed than our summaries, so it's not productive for me. I hope you understand.) Thanks,
    Replies:
    20
    Views:
    310
  24. David Harper CFA FRM

    L1.T2.113 Rachev's exponential

    Hi @hellohi, it is a small topic; I remember studying it for Part 1. It is useful for Part 2, though I don't think a direct question would be asked on this topic. But if you liked it, study it. I found it useful for my knowledge!
    Replies:
    10
    Views:
    172
  25. Suzanne Evans

    P1.T2.212. Difference between two means

    That was a long message to type on a phone - got kind of tired towards the end!
    Replies:
    34
    Views:
    614
  26. Nicole Seaman

    P1.T2.500. Bayes theorem

    Testing Amazon link
    Replies:
    25
    Views:
    345
  27. David Harper CFA FRM

    L1.T2.121 Extreme value distributions

    Hi @SheldonZ @Jayanthi Sankaran is correct: extreme value distributions were previously in FRM Part 1 (Topic 2) because the assigned distribution reading included EV, but Miller doesn't address it, so EVT currently is only to be found in FRM Part 2 (Topic 6) and nowhere in Part 1; i.e., this is an older question. For Part 1, therefore, you don't need to worry about it. For Part 2,...
    Replies:
    4
    Views:
    69
  28. David Harper CFA FRM

    L1.T2.79 Hypothesis testing

    Thank you @Nicole Manley, your link is the correct reference :) @SheldonZ I fixed it above, but it's the same as Nicole already provided. Thanks!
    Replies:
    10
    Views:
    140
  29. Nicole Seaman

    P1.T2.600. Monte Carlo simulation, sampling error (Brooks)

    Thank you @QuantMan2318, nice reasoning! (cc @Nicole Manley) The answer is given correctly as (C), which is false. But there was a typo: consistent with the text given, it should read "In regard to true (A), (B), and (D), ..." You might notice that the explanation itemizes each of the TRUE (A), (B), and (D), specifically:
    Replies:
    4
    Views:
    109
  30. Nicole Seaman

    P1.T2.400. Fabozzi on simulations

    Hi @ good question: no, the 95% confidence interval is not used because it cannot be used and is not needed. The CI is given by µ(sample) +/- (critical t)*(standard error), where (standard error) = (sample standard deviation)/sqrt(N). The 1/sqrt(N) indicates the key relationship between the length of the interval and sample size: for any given µ, critical-t, and sample standard...
    Replies:
    3
    Views:
    191
