P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, Monte Carlo Simulation (MCS), Volatility, Probability Distributions, and VaR (Intro)

Threads sorted by views (descending)
  1. Suzanne Evans

    P1.T2.300. Probability functions (Miller)

    Hi Nicole, Thanks for this. I'm still getting familiar with the forum and I didn't realize there were 3 additional pages of posts. I'll look through this. I'm finding that I am spending substantially more time trying to figure out the questions than I am actually studying the text material. Hopefully that makes sense. 300 hours of studying expected to pass level 1. 150 hours or more...
    Replies: 78 · Views: 2,335
  2. Pam Gordon

    P1.T2.309. Probability Distributions I, Miller Chapter 4

    Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00% which I do also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, false) = 0.180. I'm insecure, I like to check it with the Excel function ;) Thanks!
    Replies: 55 · Views: 1,210
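The reply quoted above cross-checks the binomial pmf C(100,95)·0.95^95·0.05^5 against Excel's BINOM.DIST. A minimal Python equivalent of that same check (scipy assumed):

```python
from math import comb

from scipy.stats import binom

# P(exactly 95 successes in 100 trials with p = 0.95), computed two ways
manual = comb(100, 95) * 0.95**95 * 0.05**5
library = binom.pmf(95, 100, 0.95)  # analogue of =BINOM.DIST(95, 100, 0.95, FALSE)

print(f"manual:  {manual:.4%}")   # ~18.00%
print(f"library: {library:.4%}")  # ~18.00%
```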
  3. David Harper CFA FRM

    P1.T2.202. Variance of sum of random variables

    Thanks David for the detailed explanation!
    Replies: 57 · Views: 1,081
  4. Pam Gordon

    P1.T2.310. Probability Distributions II, Miller Chapter 4

    Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of +10% = E[µ(B) − µ(A)] = µ[difference]. And we are looking for the probability that we observe a difference of 18.0%, so we want the probability based on Z = (observed − µ[diff])/σ[diff]. Thanks,
    Replies: 45 · Views: 1,042
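The reply above sets up a Z-statistic for an observed difference of 18% against an expected difference of +10%. A small sketch of that setup; the standard deviation of the difference is a hypothetical placeholder, since the preview does not show it:

```python
from scipy.stats import norm

mu_diff = 0.10        # null: expected difference E[B] - E[A] = +10%
observed = 0.18       # observed difference of 18%
sigma_diff = 0.05     # hypothetical standard deviation of the difference

z = (observed - mu_diff) / sigma_diff
p_greater = 1.0 - norm.cdf(z)  # Pr[difference >= 18% | mu_diff = +10%]
print(f"Z = {z:.2f}, Pr = {p_greater:.2%}")
```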
  5. asocialnot

    Question 202.2: Variance of sum of random variables

    Hi asocialnot, Great question. Because 202.2 is looking for the variance of the sum of three random variables, each with its own distributional parameters. Your formula above does indeed work, but for each of the random variables individually. For example, the first bond has PD = 4% and, as it is a Bernoulli, we know the variance = 96%*4% = 3.840%. Consistent with the worked solution, then: Variance...
    Replies: 1 · Views: 1,029
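For the variance-of-a-sum logic above: each Bernoulli contributes p(1−p), and, assuming independence, the variances add. A quick sketch using the quoted PD = 4% for the first bond and hypothetical PDs for the other two:

```python
# Variance of a Bernoulli(p) is p*(1-p); for independent variables the variances add.
pds = [0.04, 0.06, 0.08]  # first PD quoted in the thread; the other two are hypothetical

variances = [p * (1 - p) for p in pds]
print(variances[0])   # 0.0384 = 3.840%, matching the quoted figure
print(sum(variances)) # variance of the sum, assuming independence
```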
  6. Suzanne Evans

    P1.T2.209 T-statistic and confidence interval

    Thanks a lot!
    Replies: 51 · Views: 955
  7. Suzanne Evans

    P1.T2.303 Mean and variance of continuous probability density functions (pdf)

    Hi @chintanudeshi To retrieve the mean of a continuous probability distribution, we integrate x*f(x) over the probability domain. This is calculus; may I refer you to this terrific video, which explains the mean and variance:
    Replies: 49 · Views: 954
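As described above, the mean of a continuous distribution is the integral of x·f(x) over the support, and the variance is the integral of (x − µ)²·f(x). A minimal numerical check with a hypothetical pdf f(x) = x/18 on [0, 6]:

```python
from scipy.integrate import quad

# Hypothetical pdf: f(x) = x/18 on [0, 6], which integrates to 1 over its support.
f = lambda x: x / 18.0

total, _ = quad(f, 0, 6)                             # should be 1.0 (valid pdf)
mean, _ = quad(lambda x: x * f(x), 0, 6)             # integral of x*f(x) -> 4.0
var, _ = quad(lambda x: (x - mean)**2 * f(x), 0, 6)  # -> 2.0
print(total, mean, var)
```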
  8. Nicole Seaman

    P1.T2.312. Mixture distributions

    Just to add a few more thoughts, the exam "could" ask you to use an obscure level of significance which would require you to retrieve a value from a z table. If this was the case, the exam would provide a snippet of the respective region of the z table. (I would add that this is a totally reasonable question in my mind). Also, memorizing the most common z's will help you but I don't think...
    Replies: 43 · Views: 941
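On the common z values mentioned above, a quick way to reproduce a z-table lookup rather than memorize it (scipy assumed):

```python
from scipy.stats import norm

# One-tailed critical z values for common confidence levels
for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%} one-tailed z = {norm.ppf(conf):.3f}")  # 1.282, 1.645, 2.326

# Two-tailed critical z values (alpha split across both tails)
for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%} two-tailed z = {norm.ppf(1 - (1 - conf) / 2):.3f}")  # 1.645, 1.960, 2.576
```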
  9. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello! The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies: 25 · Views: 857
  10. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved it:
    e1 = z1 = -0.88
    e2 = p*z1 + z2*sqrt(1-p^2) = [0.70*(-0.88)] + [0.63*sqrt(1-(0.7)^2)] = -0.16609
    U = Mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36
    V = Mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346
    Thanks, Rajiv
    Replies: 20 · Views: 825
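Rajiv's calculation above maps two independent standard normal draws into correlated draws and then rescales them to the stated means and standard deviations. The same arithmetic in Python, using the figures from the quote:

```python
from math import sqrt

z1, z2 = -0.88, 0.63   # independent standard normal samples from the question
rho = 0.70             # correlation between the two variables

e1 = z1
e2 = rho * z1 + z2 * sqrt(1 - rho**2)  # -0.16609...

U = 5 + 3 * e1    # mean 5, sd 3 -> 2.36
V = 10 + 6 * e2   # mean 10, sd 6 -> 9.00346...
print(e2, U, V)
```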
  11. Fran

    P1.T2.301. Miller's probability matrix

    For working out the mean of f(x), we integrate x*f(x) instead of just integrating f(x) as in the green statement above. Integrating x*f(x) just introduces another factor of x, so it is not tricky once you know how to solve the green statement above; after integrating x*f(x), you can solve it by putting x = 6.
    Replies: 23 · Views: 813
  12. Fran

    P1.T2.307. Skew and Kurtosis (Miller)

    OK... That's clear now. Thanks a lot David and Ami44.
    Replies: 30 · Views: 798
  13. chris.leupold@baml.com

    question on: 208.3.C and 202.5

    Hi Chris, I think you are correct on both, can you see the source question thread @ i.e., you've identified two errors. I apologize they are not yet fixed in the PDF (like all errors, we will revise the PDFs, but I felt it more helpful currently to prioritize the 2 fresh mock exams). Thanks,
    Replies: 11 · Views: 792
  14. David Harper CFA FRM

    L1.T2.111 Binomial & Poisson

    Hi @s3filin It's a terrific observation :cool: The Poisson can approximate the binomial (see which applies when n*p is low; in this case n*p is not super low but it's getting there). And, indeed: =BINOM.DIST(X = 5, trials = 500, p = 1%, pmf = false) = 17.63510451%, and =POISSON.DIST(X = 5, mean = 1%*500, pmf = false) = 17.54673698%. Their cumulative (CDF) is even closer: =BINOM.DIST(X = 5,...
    Replies: 44 · Views: 783
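The comparison above (n = 500, p = 1%, so λ = n·p = 5) can be reproduced directly with scipy; the pmf values quoted in the reply are the expected outputs:

```python
from scipy.stats import binom, poisson

n, p, k = 500, 0.01, 5
lam = n * p  # 5

print(binom.pmf(k, n, p))    # ~0.17635, as quoted
print(poisson.pmf(k, lam))   # ~0.17547, as quoted
print(binom.cdf(k, n, p))    # per the reply, the cumulative values are even closer
print(poisson.cdf(k, lam))
```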
  15. Fran

    P1.T2.304. Covariance (Miller)

    @omar72787 Question 303.2 concerns a continuous probability function, as opposed to the discrete probability function assumed in the (above) 304.3. But the expected value (aka, weighted average or mean) is similar: the continuous case's integrand (i.e., the term inside the integral), x*f(x)*dx, is analogous to the x*f(x) inside the summation. See below. Rather than sum the (X+1)^2 values to get 90...
    Replies: 27 · Views: 663
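The analogy above: for a discrete variable the expectation weights each outcome by its probability, just as the continuous case weights by f(x) dx. A tiny sketch with a hypothetical pmf (the actual table behind question 304.3 is not shown in the preview):

```python
# Hypothetical discrete distribution: outcomes x with probabilities p
xs = [1, 2, 3, 4]
ps = [0.1, 0.2, 0.3, 0.4]

mean = sum(x * p for x, p in zip(xs, ps))             # E[X] = sum of x*f(x)
e_g = sum((x + 1)**2 * p for x, p in zip(xs, ps))     # E[(X+1)^2], probability-weighted, not a plain sum
var = sum((x - mean)**2 * p for x, p in zip(xs, ps))  # Var[X]
print(mean, e_g, var)
```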
  16. Suzanne Evans

    P1.T2.212. Difference between two means

    That was a long message to type on a phone - got kind of tired towards the end!
    Replies: 34 · Views: 631
  17. Fran

    P1.T2.305. Minimum variance hedge (Miller)

    What a sigh of relief this is, @David Harper CFA FRM! Otherwise I would have been regarded as a complete idiot. Thanks for the confirmation!
    Replies: 21 · Views: 609
  18. Suzanne Evans

    P1.T2.206. Variance of sample average

    I am asking kind of a dumb question, but where is this formula in the Miller chapter (please tell me the reference in David's PDF)?
    Replies: 20 · Views: 608
  19. LL

    209.1

    Thanks David ! This helps ! :) I went crazy figuring out how 27.8% is derived. Glad I asked :) You get a smile looking at my avatar. :D I hope I get a smile looking at my FRM result ! ;) :rolleyes:
    Replies: 2 · Views: 565
  20. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models

    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded it here for you also @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A are meant to be available to all subscribers). In almost...
    Replies: 21 · Views: 550
  21. Fran

    P1.T2.306. Calculate the mean and variance of sums of variables.

    Hi @jacek Yes, thank you, that is our typo. We appreciate that you posted the feedback. We will fix this. @Nicole Seaman she is correct (let me put that another way: question 306.1 above has a correct version); it should be: r(i) = a(i)*F + sqrt[1-a(i)^2]*e(i), which is also represented elsewhere with identical meaning (e.g., Malz Chapter 8) as: a(i) = β(i)*m + sqrt[1-β(i)^2]*e(i)
    Replies: 33 · Views: 550
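The corrected one-factor formula above, r(i) = a(i)·F + sqrt[1−a(i)²]·e(i), yields unit-variance returns with pairwise correlation a(i)·a(j) when F and the e(i) are independent standard normals. A quick simulation sanity check (numpy assumed; the loadings are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 200_000
a1, a2 = 0.6, 0.8   # hypothetical factor loadings

F = rng.standard_normal(n_sims)    # common factor
e1 = rng.standard_normal(n_sims)   # idiosyncratic terms
e2 = rng.standard_normal(n_sims)

r1 = a1 * F + np.sqrt(1 - a1**2) * e1
r2 = a2 * F + np.sqrt(1 - a2**2) * e2

print(np.var(r1), np.var(r2))      # each ~1.0
print(np.corrcoef(r1, r2)[0, 1])   # ~a1*a2 = 0.48
```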
  22. David Harper CFA FRM

    L1.T2.104 Exponentially weighted moving average (EWMA)

    @Deepak Chitnis and @David Harper CFA FRM CIPM thanks for your replies...I will make sure I keep a special eye out as to whether the question mentions simple vs LN returns. If the question mentions neither, I think I shall plump for the LN option as that just feels more "right" to me. But hopefully it won't be too much of an issue.
    Replies: 27 · Views: 527
  23. LL

    63.1

    Thanks David!
    Replies: 12 · Views: 477
  24. Nicole Seaman

    P1.T2.405. Distributions I

    Hi @uness_o7 There are two issues, I think. First, if we were conducting a test of the sample mean (e.g., what is the probability of obtaining a sample mean profit of $25 million next week), then we need the standard error. If we know the population variance (which is not given) we can assume Z = (mean X - µ)/SQRT[σ(p)^2/n]. But realistically (as is also the case in this question) we don't...
    Replies: 16 · Views: 428
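The reply above notes that a test of the sample mean needs the standard error, Z = (mean X − µ)/sqrt(σ²/n), when the population variance is known. A sketch of that statistic with hypothetical inputs (the question's actual figures are not shown in the preview):

```python
from math import sqrt

from scipy.stats import norm

# Hypothetical inputs: population sigma assumed known; testing a sample mean against mu
mu, sigma, n = 20.0, 10.0, 25   # $ millions; sample of n weeks
sample_mean = 25.0

standard_error = sqrt(sigma**2 / n)      # sigma / sqrt(n) = 2.0
z = (sample_mean - mu) / standard_error  # 2.5
print(z, 1 - norm.cdf(z))                # one-tailed p-value ~0.62%
```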
  25. Suzanne Evans

    P1.T2.311. Probability Distributions III, Miller

    Hi @s3filin This is a typical Monte Carlo assumption: that certain risk factors are (at least a little bit) correlated. This would be used any time we want correlated normals in a Monte Carlo Simulation; it's almost not too much to say that independence (i.e., zero correlation) would be the unusual assumption. But it's super-super-easy to generate non-correlated normals, so the point is to...
    Replies: 25 · Views: 415
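Generating correlated normals for a Monte Carlo simulation, as discussed above, is commonly done with a Cholesky factor of the correlation matrix. A minimal sketch (numpy assumed; the correlation matrix is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])           # hypothetical correlation matrix

L = np.linalg.cholesky(corr)            # lower-triangular factor, corr = L @ L.T
z = rng.standard_normal((100_000, 2))   # independent standard normal draws
correlated = z @ L.T                    # rows are correlated standard normal draws

print(np.corrcoef(correlated, rowvar=False))  # off-diagonal ~0.5
```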
  26. Suzanne Evans

    P1.T2.208. Sample mean estimators (Stock & Watson)

    Hi David, I was just referring to the previous discussion to give better understanding to my question :) Thanks a lot for your time and patience. Praveen
    Replies: 21 · Views: 410
  27. David Harper CFA FRM

    L1.T2.108 Volatility forecast with GARCH(1,1)

    Hi @Tania Pereira Right, either is acceptable and, in the case of question 108.3 above, it makes a difference: the given answer is 2.363% but if we instead computed a discrete daily return (i.e., 11.052/10 - 1 = 3.83%) then the 10-day volatility forecast is 2.429%, a difference of 0.066%. That's why this older question of mine is clearly imprecise (sorry): the question needs to specify that...
    Replies: 26 · Views: 395
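The point above is that the choice between a log return and a simple (discrete) return changes the GARCH input and therefore the volatility forecast. A small illustration of that sensitivity with hypothetical prices and GARCH(1,1) parameters (not the figures from question 108.3):

```python
from math import log, sqrt

# Hypothetical GARCH(1,1) parameters and prices -- not the inputs from question 108.3
omega, alpha, beta = 0.000002, 0.06, 0.92
s_prev, s_now = 10.00, 10.38
prior_vol = 0.015                      # assumed prior daily volatility of 1.5%

log_return = log(s_now / s_prev)       # continuously compounded return
simple_return = s_now / s_prev - 1     # discrete return

for r in (log_return, simple_return):
    variance = omega + alpha * r**2 + beta * prior_vol**2   # GARCH(1,1) one-step update
    print(f"return {r:.4%} -> next-day volatility {sqrt(variance):.4%}")
```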
  28. Nicole Seaman

    P1.T2.500. Bayes theorem

    Testing Amazon link
    Replies: 25 · Views: 375
  29. Suzanne Evans

    P1.T2.204. Joint, marginal, and conditional probability functions (Stock & Watson)

    Hi Melody (@superpocoyo ) Here is the spreadsheet @ Please note that, in my response to mastvikas above, I had a typo which I've now corrected. It should read: (10 - 29.38)^2*(0.05/.32) = 58.65. 105.859 is the conditional variance which determines the answer of 10.3 (the conditional standard deviation). I think the key here is to realize that, after we grok the conditionality, we are...
    Replies: 10 · Views: 374
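The conditional-variance recipe above re-weights each squared deviation by the conditional probability, e.g. (10 − 29.38)² × (0.05/0.32). A generic sketch of that recipe with a hypothetical slice of a joint probability table (not the table from question 204):

```python
from math import sqrt

# Hypothetical slice of a joint probability table: outcomes of X given the conditioning event,
# whose joint probabilities sum to 0.32 (the probability of the conditioning event).
outcomes = [10, 25, 35]
joint_probs = [0.05, 0.12, 0.15]
p_condition = sum(joint_probs)   # 0.32

cond_probs = [p / p_condition for p in joint_probs]
cond_mean = sum(x * p for x, p in zip(outcomes, cond_probs))
cond_var = sum((x - cond_mean)**2 * p for x, p in zip(outcomes, cond_probs))
print(cond_mean, cond_var, sqrt(cond_var))   # conditional mean, variance, standard deviation
```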
  30. David Harper CFA FRM

    L1.T2.109 EWMA covariance

    Hi @FM22 From Hull 23.7:
    Replies: 9 · Views: 373
