P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, Monte Carlo Simulation (MCS), Volatility, Probability Distributions, and VaR (Intro)

  1. Suzanne Evans

    P1.T2.300. Probability functions (Miller)

It surely does, thank you.
    Replies:
    83
    Views:
    2,685
  2. Pam Gordon

    P1.T2.309. Probability Distributions I, Miller Chapter 4

Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00%, which I also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, false) = 0.180. I'm insecure, so I like to check it with the Excel function ;) Thanks!
    (A Python sketch of this check follows this entry.)
    Replies:
    55
    Views:
    1,294
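
Not from the thread itself, but a minimal Python sketch of the binomial check quoted above; scipy's binom.pmf plays the role of Excel's BINOM.DIST with the cumulative flag set to FALSE.

```python
from math import comb
from scipy.stats import binom

# P(X = 95) for X ~ Binomial(n = 100, p = 0.95), per the post above.
n, p, k = 100, 0.95, 95

manual = comb(n, k) * p**k * (1 - p)**(n - k)  # C(100,95) * 0.95^95 * 0.05^5
library = binom.pmf(k, n, p)                   # Excel: BINOM.DIST(95, 100, 0.95, FALSE)

print(f"manual: {manual:.4f}")   # ~0.1800
print(f"scipy:  {library:.4f}")  # ~0.1800
```
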
  3. David Harper CFA FRM

    P1.T2.202. Variance of sum of random variables (Stock & Watson)

Hi @Arseniy Semiletenko Good point! In truth, it's a weakness of my question: I wrote this question in 2012 (per the 2xx.x numbering) and, having improved my technique, I would not today write a question that has two valid answers to the self-contained question. It's not a "best practice." It's a corollary of a rule that I've employed in reviewing, and giving feedback on, GARP's own practice...
    Replies:
    61
    Views:
    1,149
  4. Pam Gordon

    P1.T2.310. Probability Distributions II, Miller Chapter 4

Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of 10% = E[µ(B) − µ(A)] = µ[difference] = +10%. And we are looking for the probability that we observe a difference of 18.0%, so we want Z = (observed − µ[diff])/σ. Thanks,
    (A Python sketch of this Z-score follows this entry.)
    Replies:
    48
    Views:
    1,129
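
A minimal Python sketch of the Z-score above. The expected and observed differences are from the snippet; the standard error is an assumed placeholder, since the snippet truncates before giving it.

```python
from scipy.stats import norm

# From the post above: expected difference is 10%, observed difference is 18%.
mu_diff = 0.10
observed = 0.18
se_diff = 0.05   # ASSUMED standard error of the difference (illustrative only)

z = (observed - mu_diff) / se_diff
p_value = 1 - norm.cdf(z)   # one-tailed probability of observing >= 18%
print(f"z = {z:.2f}, one-tailed p = {p_value:.4f}")
```
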
  5. Suzanne Evans

    P1.T2.303 Mean and variance of continuous probability density functions (pdf) (Miller)

Hi @chintanudeshi To retrieve the mean of a continuous probability distribution, we integrate x*f(x) over the probability domain. This is calculus; may I refer you to this terrific video, which explains the mean and variance:
    (A short sketch of the integration also follows this entry.)
    Replies:
    49
    Views:
    1,036
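
A sketch of that integration in Python with sympy, using a made-up pdf (not the one from the thread): f(x) = 3x^2/8 on [0, 2].

```python
import sympy as sp

# Hypothetical pdf for illustration: f(x) = 3x^2/8 on [0, 2].
x = sp.symbols('x')
f = 3 * x**2 / 8

total = sp.integrate(f, (x, 0, 2))        # should equal 1 for a valid pdf
mean = sp.integrate(x * f, (x, 0, 2))     # E[X] = integral of x*f(x)
var = sp.integrate(x**2 * f, (x, 0, 2)) - mean**2   # Var = E[X^2] - E[X]^2

print(total, mean, var)   # 1, 3/2, 3/20
```
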
  6. Suzanne Evans

    P1.T2.209 T-statistic and confidence interval (Stock & Watson)

    Thanks a lot!
    Thanks a lot!
    Thanks a lot!
    Thanks a lot!
    Replies:
    53
    Views:
    1,018
  7. Nicole Seaman

    P1.T2.312. Mixture distributions (Miller)

Just to add a few more thoughts, the exam "could" ask you to use an obscure level of significance which would require you to retrieve a value from a z table. If this was the case, the exam would provide a snippet of the respective region of the z table. (I would add that this is a totally reasonable question in my mind). Also, memorizing the most common z's will help you but I don't think...
    (A Python sketch of a programmatic z lookup follows this entry.)
    Replies:
    43
    Views:
    1,010
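
Outside the exam, scipy's inverse normal CDF serves as a programmatic z table; a minimal sketch, where the 3.7% level is an invented example of an "obscure" significance level.

```python
from scipy.stats import norm

# norm.ppf is the inverse CDF, i.e., a programmatic z table.
for alpha in (0.05, 0.01, 0.037):           # 3.7% is the invented "obscure" level
    z_one_tail = norm.ppf(1 - alpha)        # one-tailed critical value
    z_two_tail = norm.ppf(1 - alpha / 2)    # two-tailed critical value
    print(f"alpha={alpha:.3f}: one-tail z={z_one_tail:.3f}, two-tail z={z_two_tail:.3f}")
```
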
  8. Fran

    P1.T2.301. Miller's probability matrix

Hi @tkvfrm Your expression is correctly showing the probability (i.e., the density or pdf) for each value X ∈ {1, 2, 3} such that, given we have solved for the density f(x) = (1/36)*x^3, it is true that the sum of the probabilities should equal 100% = 1/36*(1)^3 + 1/36*(2)^3 + 1/36*(3)^3 = 1/36 + 8/36 + 27/36 = 2.8% + 22.2% + 75.0%. But we want the mean, which requires the summation of x*f(x) =...
    (A Python check of this mean follows this entry.)
    Replies:
    25
    Views:
    931
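
A quick Python check of the density and mean described above, using exact fractions.

```python
from fractions import Fraction

# The density from the post above: f(x) = x^3/36 for x in {1, 2, 3}.
f = {x: Fraction(x**3, 36) for x in (1, 2, 3)}

total = sum(f.values())                    # 1/36 + 8/36 + 27/36 = 1
mean = sum(x * p for x, p in f.items())    # E[X] = sum of x*f(x)

print(total)               # 1
print(mean, float(mean))   # 49/18 ~ 2.722
```
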
  9. Nicole Seaman

    P1.T2.504. Copulas (Hull)

Hello! The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies:
    25
    Views:
    887
  10. Fran

    P1.T2.307. Skew and Kurtosis (Miller)

OK... That's clear now. Thanks a lot David and Ami44.
    Replies:
    30
    Views:
    857
  11. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

@hellohi, This is how I have solved:
    e1 = z1 = -0.88
    e2 = ρ*z1 + z2*sqrt(1 − ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1 − 0.7^2)] = -0.16609
    U = mean + SD*e1 = 5 + [3*(-0.88)] = 2.36
    V = mean + SD*e2 = 10 + [6*(-0.16609)] = 9.00346
    Thanks, Rajiv
    (A Python sketch of these steps follows this entry.)
    Replies:
    20
    Views:
    839
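
A minimal Python sketch reproducing Rajiv's steps above: two independent standard normal draws are turned into a correlated pair, then scaled to each variable's mean and standard deviation.

```python
from math import sqrt

# Inputs from the post above.
z1, z2 = -0.88, 0.63    # independent standard normal draws
rho = 0.70              # correlation between the two variables

e1 = z1
e2 = rho * z1 + z2 * sqrt(1 - rho**2)   # correlated second draw

U = 5 + 3 * e1    # mean 5, sd 3
V = 10 + 6 * e2   # mean 10, sd 6
print(f"e2 = {e2:.5f}, U = {U:.2f}, V = {V:.5f}")   # -0.16609, 2.36, 9.00346
```
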
  12. David Harper CFA FRM

    L1.T2.111 Binomial & Poisson (Rachev)

Hi @s3filin It's a terrific observation :cool: The Poisson can approximate the binomial (the approximation applies when n*p is low; in this case n*p is not super low, but it's getting there). And, indeed: =BINOM.DIST(X = 5, trials = 500, p = 1%, pmf = false) = 17.63510451%, and =POISSON.DIST(X = 5, mean = 1%*500, pmf = false) = 17.54673698%. Their cumulative (CDF) is even closer: =BINOM.DIST(X = 5,...
    (A Python version of this comparison follows this entry.)
    Replies:
    44
    Views:
    793
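
The same comparison in Python, mapping the Excel calls above onto scipy's pmf and cdf functions.

```python
from scipy.stats import binom, poisson

# From the post above: Binomial(n = 500, p = 1%) vs Poisson(mean = n*p = 5).
n, p, k = 500, 0.01, 5

print(binom.pmf(k, n, p))      # ~0.17635, Excel BINOM.DIST(5, 500, 0.01, FALSE)
print(poisson.pmf(k, n * p))   # ~0.17547, Excel POISSON.DIST(5, 5, FALSE)
print(binom.cdf(k, n, p))      # the cumulative values are even closer
print(poisson.cdf(k, n * p))
```
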
  13. Fran

    P1.T2.304. Covariance (Miller)

@omar72787 Question 303.2 concerns a continuous probability function, as opposed to the discrete probability function assumed in the (above) 304.3. But the expected value (aka, weighted average or mean) is similar: the integrand in the continuous case (i.e., the term inside the integral), x*f(x)*dx, is analogous to the x*f(x) inside the summation. See below. Rather than sum the (X+1)^2 values to get 90...
    (A Python sketch of both expectations follows this entry.)
    Replies:
    27
    Views:
    707
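
A sketch of the discrete/continuous analogy described above. Both distributions below are made up for illustration; they are not the ones from questions 303.2 or 304.3.

```python
import numpy as np
from scipy.integrate import quad

# Discrete mean: sum of x*f(x) over the support (illustrative pmf).
xs = np.array([1, 2, 3, 4])
probs = np.array([0.1, 0.2, 0.3, 0.4])
discrete_mean = np.sum(xs * probs)

# Continuous mean: integral of x*f(x) dx (illustrative pdf f(x) = 2x on [0, 1]).
pdf = lambda x: 2 * x
continuous_mean, _ = quad(lambda x: x * pdf(x), 0, 1)

print(discrete_mean)     # 3.0
print(continuous_mean)   # 2/3
```
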
  14. Fran

    P1.T2.305. Minimum variance hedge (Miller)

What a sigh of relief this is, @David Harper CFA FRM! Otherwise I would have been regarded as a complete idiot. Thanks for the confirmation!
    Replies:
    21
    Views:
    686
  15. Suzanne Evans

    P1.T2.212. Difference between two means (Stock & Watson)

That was a long message to type on a phone - got kind of tired towards the end!
    Replies:
    34
    Views:
    640
  16. Suzanne Evans

    P1.T2.206. Variance of sample average (Stock & Watson)

I am asking a kind of dumb question, but where is this formula in the Miller chapter? (Please point me to the reference in David's PDF.)
    Replies:
    24
    Views:
    638
  17. Fran

    P1.T2.306. Calculate the mean and variance of sums of variables. (Miller)

Thanks both. Strange (to me at least), but I get it :)
    Replies:
    37
    Views:
    624
  18. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models (Hull)

Hi @Spinozzi That's a fair observation. I did parrot Hull's language here, such that he does refer to these given assumptions as "current daily volatilities" (see emphasized text below; which is solved above in the XLS snapshot on the column next to BT 502.2). I also cross-checked his usage in OFOD 10th edition and he similarly refers to these assumptions as "current daily volatilities." (e.g.,...
    Replies:
    23
    Views:
    599
  19. David Harper CFA FRM

    L1.T2.104 Exponentially weighted moving average (EWMA) (Hull)

@Deepak Chitnis and @David Harper CFA FRM CIPM thanks for your replies... I will make sure I keep a special eye out as to whether the question mentions simple vs LN returns. If the question mentions neither, I think I shall plump for the LN option, as that just feels more "right" to me. But hopefully it won't be too much of an issue.
    (A Python sketch comparing the two return types follows this entry.)
    Replies:
    27
    Views:
    536
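
A minimal sketch of the simple-vs-LN distinction raised above, on made-up prices; for small daily moves the two are nearly equal, which is why the ambiguity rarely matters much.

```python
import numpy as np

# Illustrative daily prices (not from the thread).
prices = np.array([10.00, 10.20, 10.05, 10.30])

simple = prices[1:] / prices[:-1] - 1        # discrete/simple returns
log_ret = np.log(prices[1:] / prices[:-1])   # continuously compounded (LN) returns

print(np.round(simple, 5))
print(np.round(log_ret, 5))   # nearly equal for small daily moves
```
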
  20. Nicole Seaman

    Quiz-T2 P1.T2.405. Distributions I

Hi @otcfin Per this recent thread, here is my summary table on the use of normal Z versus student's t. The choice essentially depends on whether we know the population variance (i.e., known variance justifies the normal, but unknown variance estimates the population with the sample variance, which consumes a d.f. and warrants the more conservative student's t): Re t-statistic degrees...
    (A Python sketch of the z-vs-t choice follows this entry.)
    Replies:
    18
    Views:
    460
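
A small sketch of the rule summarized above: with a sample variance in place of a known population variance, the student's t critical value is wider than the normal z.

```python
from scipy.stats import norm, t

# Known population variance -> normal z; unknown variance -> student's t (n-1 d.f.).
alpha, n = 0.05, 20   # illustrative significance level and sample size

z_crit = norm.ppf(1 - alpha / 2)          # two-tailed normal critical value
t_crit = t.ppf(1 - alpha / 2, df=n - 1)   # t is wider, i.e., more conservative

print(f"z = {z_crit:.3f}, t({n-1}) = {t_crit:.3f}")   # 1.960 vs 2.093
```
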
  21. Suzanne Evans

    P1.T2.311. Probability Distributions III, Miller

Hi @s3filin This is a typical Monte Carlo assumption: that certain risk factors are (at least a little bit) correlated. This would be used any time we want correlated normals in a Monte Carlo simulation; it's almost not too much to say that independence (i.e., zero correlation) would be the unusual assumption. But it's super-super-easy to generate non-correlated normals, so the point is to...
    (A Python sketch of generating correlated normals follows this entry.)
    Replies:
    25
    Views:
    453
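
A minimal sketch of the point above: a Cholesky factor turns independent standard normals into correlated ones. The correlation is an assumed illustrative value.

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.6                                 # assumed correlation (illustrative)
corr = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(corr)              # lower-triangular factor of the matrix

z = rng.standard_normal((2, 100_000))     # independent standard normal draws
x = L @ z                                 # correlated draws

print(np.corrcoef(x)[0, 1])               # ~0.6, as intended
```
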
  22. Suzanne Evans

    P1.T2.208. Sample mean estimators (Stock & Watson)

Hi David, I was just referring to the previous discussion to give better understanding to my question :) Thanks a lot for your time and patience. Praveen
    Replies:
    33
    Views:
    426
  23. Nicole Seaman

    P1.T2.500. Bayes theorem (Miller)

Testing Amazon link
    Replies:
    25
    Views:
    414
  24. David Harper CFA FRM

    L1.T2.103 Weighting schemes to estimate volatility (Hull)

Hi @s3filin Great question and, yes, I am indeed saying that "Beta [in GARCH] is a decay factor and is analogous to lambda in EWMA." Hull actually shows this specifically in Chapter 23.4; I copied it below. In this way, GARCH β is analogous to EWMA λ, and GARCH α is analogous to EWMA's (1 − λ), so I would not say--and hopefully did not anywhere say--something like "what's lambda for EWMA is...
    (A Python sketch of the EWMA-as-GARCH analogy follows this entry.)
    Replies:
    11
    Views:
    400
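
A tiny sketch of the analogy above: EWMA is the special case of GARCH(1,1) with ω = 0, α = (1 − λ), and β = λ. Inputs below are illustrative.

```python
# Illustrative inputs (not from the thread).
lam = 0.94           # EWMA decay factor (RiskMetrics-style)
sigma_prev = 0.015   # yesterday's volatility estimate
r_prev = 0.02        # yesterday's return

# EWMA update of the variance.
ewma_var = lam * sigma_prev**2 + (1 - lam) * r_prev**2

# The same update written as GARCH(1,1): omega + alpha*r^2 + beta*sigma^2.
omega, alpha, beta = 0.0, 1 - lam, lam
garch_var = omega + alpha * r_prev**2 + beta * sigma_prev**2

print(ewma_var == garch_var)   # True: identical recursions
print(ewma_var**0.5)           # updated volatility estimate
```
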
  25. David Harper CFA FRM

    L1.T2.108 Volatility forecast with GARCH(1,1) (Hull)

Hi @Tania Pereira Right, either is acceptable and, in the case of question 108.3 above, it makes a difference: the given answer is 2.363%, but if we instead computed a discrete daily return (i.e., 11.052/10 - 1 = 3.83%) then the 10-day volatility forecast is 2.429%, a difference of 0.066%. That's why this older question of mine is clearly imprecise (sorry): the question needs to specify that...
    (A Python sketch of a multi-day GARCH forecast follows this entry.)
    Replies:
    26
    Views:
    400
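
A sketch of a multi-day GARCH(1,1) forecast in the spirit of question 108.3, with illustrative parameters (not the question's). It follows Hull's mean-reverting forecast: the expected variance k days ahead reverts to the long-run variance V_L at rate (α + β).

```python
# Illustrative GARCH(1,1) parameters and inputs (assumed, not from 108.3).
omega, alpha, beta = 0.000002, 0.06, 0.92
r_today, sigma_today = 0.01, 0.012   # assumed daily log return and volatility

# One-day-ahead variance update.
var_next = omega + alpha * r_today**2 + beta * sigma_today**2

# Long-run variance, then the 10-day cumulative variance forecast:
# E[var on day k ahead] = V_L + (alpha + beta)^k * (var_next - V_L).
persistence = alpha + beta
V_L = omega / (1 - persistence)
total_var = sum(V_L + persistence**k * (var_next - V_L) for k in range(10))

print(f"10-day volatility forecast: {total_var**0.5:.4%}")
```
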
  26. Suzanne Evans

    P1.T2.204. Joint, marginal, and conditional probability functions (Stock & Watson)

Hi Melody (@superpocoyo ) Here is the spreadsheet @ Please note that, in my response to mastvikas above, I had a typo which I've now corrected. It should read: (10 - 29.38)^2*(0.05/.32) = 58.65. The 105.859 is the conditional variance, which determines the answer of 10.3 (the conditional standard deviation). I think the key here is to realize that, after we grok the conditionality, we are...
    (A Python sketch of a conditional mean and variance follows this entry.)
    Replies:
    10
    Views:
    384
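
A sketch of the conditional mean/variance mechanics discussed above, on a hypothetical joint distribution (the actual table isn't reproduced here; only the 0.32 marginal matches the snippet). Conditioning on an outcome means re-normalizing the joint probabilities in that row.

```python
import numpy as np

# Hypothetical X values and joint probabilities P(X = x, Y = y) for a fixed y.
x_vals = np.array([10.0, 20.0, 30.0, 40.0])
joint_p = np.array([0.05, 0.07, 0.08, 0.12])   # sums to 0.32, the marginal P(Y = y)

cond_p = joint_p / joint_p.sum()               # conditional pmf P(X = x | Y = y)
cond_mean = np.sum(x_vals * cond_p)
cond_var = np.sum((x_vals - cond_mean)**2 * cond_p)

print(cond_mean, cond_var, np.sqrt(cond_var))  # conditional mean, variance, std dev
```
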
  27. Nicole Seaman

    Quiz-T2 P1.T2.403. Probabilities

Hi @a_ishrat1973 See below. You can get it two ways, but either requires a secondary function (the function directly above a primary function). [2nd][×] gives the factorial (X!), and you get 184,756 with 20[2nd][×]÷(10[2nd][×][x^2]) =, or with 20[2nd][×]÷10[2nd][×]÷10[2nd][×] =. Or you can just use the built-in combination (secondary) function above the [+] key, such that [nCr] = [2nd][+] and you...
    (A Python check of the combination follows this entry.)
    Replies:
    13
    Views:
    381
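
A one-liner check of the same combination in Python, mirroring both calculator routes above.

```python
from math import comb, factorial

# C(20, 10) = 20! / (10! * 10!) = 184,756, per the post above.
print(comb(20, 10))                                      # built-in nCr
print(factorial(20) // (factorial(10) * factorial(10)))  # the factorial route
```
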
  28. David Harper CFA FRM

    L1.T2.109 EWMA covariance (Hull)

Hi @FM22 From Hull 23.7:
    Replies:
    9
    Views:
    378
  29. Nicole Seaman

    P1.T2.501. More Bayes Theorem (Miller)

great - thanks again
    Replies:
    16
    Views:
    378
  30. Nicole Seaman

    P1.T2.314. Miller's one- and two-tailed hypotheses

Hi @hellohi It's called linear interpolation; please see the link, and hopefully my picture below will help. Your table only gives us values at 20% and 15%, but we want the value associated with 16.36%. Visually, we want the (unseen) value in the yellow cell, which is (so to speak) directly below the 16.36%. This: (16.36% - 20.00%)/(15.00% - 20.00%) = 0.728 gives us the fraction of green to blue...
    (A Python sketch of the interpolation follows this entry.)
    Replies:
    18
    Views:
    376
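
A minimal sketch of the interpolation above. The 20%/15% bracketing points and the 0.728 fraction are from the snippet; the table y-values are hypothetical placeholders since the actual table isn't shown.

```python
import numpy as np

# Bracketing x-values and target from the post above.
x0, x1 = 0.20, 0.15
target = 0.1636

frac = (target - x0) / (x1 - x0)   # = 0.728, the fraction of the way from x0 to x1
y0, y1 = 1.25, 1.45                # HYPOTHETICAL table values at x0 and x1
y = y0 + frac * (y1 - y0)

print(f"fraction = {frac:.3f}, interpolated value = {y:.4f}")
# np.interp gives the same answer (it requires ascending x-values):
print(np.interp(target, [0.15, 0.20], [1.45, 1.25]))
```
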
