P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, Monte Carlo Simulation (MCS), Volatility, Probability Distributions, and VaR (Intro)

Threads sorted by Views (descending):
  1. Suzanne Evans

    P1.T2.300. Probability functions (Miller)

    I noticed that people were stuck on the same step for each of the problems (the anti-derivative), like I was - I recommend watching the following YouTube video because David's solutions certainly make sense to me now after watching the 10 min clip. So for Problem 300.1, you need to find the anti-derivative for (1/8)x-0.75. Here is my enhanced step by step for people who are struggling: 1....
    Replies: 69 · Views: 2,081
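    For anyone wanting to check the anti-derivative step symbolically, here is a minimal Python/SymPy sketch. It assumes the density in 300.1 is f(x) = (1/8)*x^(-3/4) on [0, 16]; that exponent is an assumption (the preview above drops any superscript), so substitute the actual function from the question.

```python
# Minimal sketch, assuming f(x) = (1/8) * x^(-3/4) on [0, 16] (assumption,
# not confirmed by the preview text).
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.Rational(1, 8) * x**sp.Rational(-3, 4)

F = sp.integrate(f, x)               # anti-derivative: x**(1/4) / 2
total = sp.integrate(f, (x, 0, 16))  # definite integral over the support

print(F)      # x**(1/4)/2 (up to a constant)
print(total)  # 1, i.e., the assumed f(x) is a valid pdf on [0, 16]
```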
  2. Pam Gordon

    P1.T2.309. Probability Distributions I, Miller Chapter 4

    Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00% which I do also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, false) = 0.180. I'm insecure, I like to check it with the Excel function ;) Thanks!
    Replies: 55 · Views: 1,139
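    The Excel cross-check above can be reproduced in Python with SciPy; this sketch evaluates the same binomial pmf, P(X = 95) for n = 100 and p = 0.95.

```python
# Cross-check of =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, FALSE)
from math import comb
from scipy.stats import binom

manual = comb(100, 95) * 0.95**95 * 0.05**5   # C(100,95) * p^95 * (1-p)^5
scipy_pmf = binom.pmf(95, 100, 0.95)

print(round(manual, 4), round(scipy_pmf, 4))  # both ~0.1800
```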
  3. asocialnot

    Question 202.2: Variance of sum of random variables

    Hi asocialnot, Great question. Because 202.2 is looking for the variance of the sum of three random variables, each with its own distributional parameters. Your formula above does indeed work, but for each of the random variables individually. For example, the first bond has PD = 4% and, as it is a Bernoulli, we know the variance = 96%*4% = 3.840%. Consistent with the worked solution, then: Variance...
    Replies: 1 · Views: 1,029
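    A small sketch of the point above: each Bernoulli has variance p(1-p), and (for independent bonds) the variance of the sum is the sum of the variances. Only the first PD (4%) comes from the thread; the other two values are hypothetical placeholders.

```python
# Variance of a sum of independent Bernoulli default indicators.
# pds[0] = 4% is from the worked solution; the other PDs are illustrative.
pds = [0.04, 0.02, 0.06]

variances = [p * (1 - p) for p in pds]    # Bernoulli variance = p(1-p)
var_sum = sum(variances)                  # independence: variances add

print(variances[0])   # 0.0384, i.e., 3.840% as in the worked solution
print(var_sum)
```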
  4. David Harper CFA FRM

    P1.T2.202. Variance of sum of random variables

    David, please ignore me. I figured it out - the beta is between G and the portfolio and not G with S so I worked out part of the covariance but not the full covariance. I now understand the formula you have used. Sorry for the trouble.
    Replies: 53 · Views: 1,024
  5. Pam Gordon

    P1.T2.310. Probability Distributions II, Miller Chapter 4

    Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of 10%; i.e., E[µ(B) - µ(A)] = µ[difference] = +10%. And we are looking for the probability that we observe a difference of 18.0%, so we want the probability implied by Z = (observed - µ[difference])/σ. Thanks,
    Replies: 45 · Views: 972
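    A hedged sketch of the z-statistic described above. The expected (+10%) and observed (+18%) differences come from the post; the standard deviation of the difference is not shown in this preview, so sigma_diff below is purely an assumed placeholder.

```python
# Z-statistic for an observed difference versus the null expected difference.
from scipy.stats import norm

mu_diff = 0.10       # expected difference under the null (from the post)
observed = 0.18      # observed difference (from the post)
sigma_diff = 0.05    # placeholder assumption: std dev of the difference

z = (observed - mu_diff) / sigma_diff
p_exceed = 1 - norm.cdf(z)    # Pr[difference >= 18%] under the null

print(z, p_exceed)
```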
  6. Nicole Seaman

    P1.T2.312. Mixture distributions

    Just to add a few more thoughts, the exam "could" ask you to use an obscure level of significance which would require you to retrieve a value from a z table. If this was the case, the exam would provide a snippet of the respective region of the z table. (I would add that this is a totally reasonable question in my mind). Also, memorizing the most common z's will help you but I don't think...
    Replies: 43 · Views: 900
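    As a companion to the point about memorizing the most common z values: they can be recovered from the inverse normal CDF, which is all a z-table lookup does. A quick SciPy sketch:

```python
# Common one-sided critical z values from the inverse normal CDF.
from scipy.stats import norm

for conf in (0.90, 0.95, 0.975, 0.99, 0.995):
    print(conf, round(norm.ppf(conf), 3))
# 0.90 -> 1.282, 0.95 -> 1.645, 0.975 -> 1.960, 0.99 -> 2.326, 0.995 -> 2.576
```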
  7. Suzanne Evans

    P1.T2.209 T-statistic and confidence interval

    Hi @sandra1122 In a word, yes. The Bernoulli is a highly likely exam candidate because it characterizes default (i.e., either survive or default) and also VaR exceedance (on a given day, the VaR is either exceeded or not). Importantly, the sum of a series of i.i.d. Bernoulli variables (succeed/fail) follows the binomial distribution. The exam will also expect you to know the variance of a Bernoulli...
    Replies: 44 · Views: 896
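    A minimal sketch of the variance facts mentioned above: a Bernoulli has variance p(1-p), and the binomial (a sum of n i.i.d. Bernoullis) has variance n*p*(1-p). The default probability used is illustrative.

```python
# Bernoulli variance p(1-p) and the variance of the resulting binomial count.
p, n = 0.03, 100                    # illustrative PD and number of obligors

bernoulli_var = p * (1 - p)         # variance of a single default indicator
binomial_var = n * bernoulli_var    # variance of the number of defaults

print(bernoulli_var, binomial_var)  # 0.0291, 2.91
```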
  8. Suzanne Evans

    P1.T2.303 Mean and variance of continuous probability density functions (pdf)

    Hi @omar72787 not silly at all. First, understand that the question was designed so that the function is a probability density function. Most functions won't be probability densities. A probability density must have an area under the curve equal to 1.0 (100%). The area under a curve is found by "evaluating the definite integral"; this is a phrase with lots of video help, if you google it; I think Krista King is...
    Replies: 47 · Views: 880
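    A quick numerical version of the "area under the curve must equal 1.0" check: evaluate the definite integral of the candidate density over its support. The density below is a hypothetical example, not the one from question 303.

```python
# Numerical check that a candidate density integrates to 1.0 over its support.
from scipy.integrate import quad

f = lambda x: x / 8.0          # hypothetical pdf on [0, 4]
area, _ = quad(f, 0, 4)

print(area)   # 1.0, so this f qualifies as a probability density on [0, 4]
```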
  9. chris.leupold@baml.com

    question on: 208.3.C and 202.5

    Hi Chris, I think you are correct on both, can you see the source question thread @ i.e., you've identified two errors. I apologize they are not yet fixed in the PDF (like all errors, we will revise the PDFs, but I felt it more helpful currently to prioritize the 2 fresh mock exams). Thanks,
    Replies: 11 · Views: 792
  10. Fran

    P1.T2.301. Miller's probability matrix

    For working out the mean of f(x), we integrate x*f(x) instead of just integrating f(x) as in the green statement above. Integrating x*f(x) just means you have an extra factor of x, so it is not tricky once you know how to solve the green statement above; after integrating x*f(x), you can solve it by putting x = 6.
    Replies: 23 · Views: 781
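    A small sketch of the step described above, E[X] = ∫ x*f(x) dx. The density is hypothetical (the thread's f(x) is not shown in this preview); the upper limit of 6 simply mirrors the "putting x = 6" remark.

```python
# Mean of a continuous pdf as the integral of x*f(x) over the support.
import sympy as sp

x = sp.symbols('x', nonnegative=True)
f = x / 18                                # hypothetical pdf on [0, 6]

check = sp.integrate(f, (x, 0, 6))        # 1, so f is a valid density
mean = sp.integrate(x * f, (x, 0, 6))     # E[X] = 4

print(check, mean)
```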
  11. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies: 25 · Views: 778
  12. David Harper CFA FRM

    L1.T2.111 Binomial & Poisson

    Hi @s3filin It's a terrific observation :cool: The Poisson can approximate the binomial (see which applies when n*p is low; in this case n*p is not super low but it's getting there). And, indeed: =BINOM.DIST(X = 5, trials = 500, p = 1%, pmf = false) = 17.63510451%, and =POISSON.DIST(X = 5, mean = 1%*500, pmf = false) = 17.54673698%. Their cumulative (CDF) is even closer: =BINOM.DIST(X = 5,...
    Replies: 44 · Views: 766
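    The same comparison in Python with SciPy, reproducing the Excel figures quoted above:

```python
# Binomial(n=500, p=1%) versus its Poisson approximation with mean n*p = 5,
# both evaluated at X = 5.
from scipy.stats import binom, poisson

print(binom.pmf(5, 500, 0.01))   # ~0.1763510, matches BINOM.DIST(5, 500, 1%, FALSE)
print(poisson.pmf(5, 5))         # ~0.1754674, matches POISSON.DIST(5, 5, FALSE)

# The cumulative probabilities are even closer:
print(binom.cdf(5, 500, 0.01), poisson.cdf(5, 5))
```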
  13. Fran

    P1.T2.307. Skew and Kurtosis (Miller)

    OK... That's clear now. Thanks a lot David and Ami44.
    Replies: 30 · Views: 742
  14. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved it: e1 = z1 = -0.88; e2 = p*z1 + z2*sqrt(1-p^2) = [0.70*(-0.88)] + [0.63*sqrt(1-(0.7)^2)] = -0.16609; U = mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36; V = mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346. Thanks, Rajiv
    Replies: 20 · Views: 731
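    The worked numbers above can be reproduced in a few lines of Python:

```python
# Two correlated samples built from independent draws z1 = -0.88 and z2 = 0.63
# with rho = 0.70, then scaled to U ~ (mean 5, sd 3) and V ~ (mean 10, sd 6).
from math import sqrt

z1, z2, rho = -0.88, 0.63, 0.70

e1 = z1
e2 = rho * z1 + z2 * sqrt(1 - rho**2)   # -0.16609

U = 5 + 3 * e1      # 2.36
V = 10 + 6 * e2     # 9.00346

print(e2, U, V)
```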
  15. Suzanne Evans

    P1.T2.212. Difference between two means

    That was a long message to type on a phone - got kind of tired towards the end!
    Replies: 34 · Views: 619
  16. Fran

    P1.T2.304. Covariance (Miller)

    Hi David, 1. In the above quote, why do you do the sum of f(x) for all intervals divided by 5? I did x*f(x) (like you did in the PDF). I am trying to figure out how you get 3 and 18 for E[X] and E[Y]. If you plug in 18 in y=(x+1)^2, you do not get 3. 2. Why do you divide by 5? There are many other BT problems that I did, in which all I had to do was compute x*f(x) to get the mean... not divide by n....
    Replies: 26 · Views: 616
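    For what it's worth, a tiny sketch of the usual rule behind this question: for a discrete pmf the mean is E[X] = Σ x*p(x), with no extra division by the number of outcomes (dividing by n belongs to a simple average of equally likely sample values). The values below are illustrative, not the table from question 304.

```python
# Discrete expectation: probability-weighted sum, no division by n.
xs = [1, 2, 3, 4, 5]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]   # pmf, must sum to 1

mean = sum(x * p for x, p in zip(xs, probs))
print(mean)   # 3.0
```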
  17. Suzanne Evans

    P1.T2.206. Variance of sample average

    I am asking kind of a dumb question, but where is this formula in the Miller chapter (please tell me the reference in David's PDF)?
    Replies: 20 · Views: 593
  18. LL

    209.1

    Thanks David ! This helps ! :) I went crazy figuring out how 27.8% is derived. Glad I asked :) You get a smile looking at my avatar. :D I hope I get a smile looking at my FRM result ! ;) :rolleyes:
    Replies: 2 · Views: 565
  19. Fran

    P1.T2.305. Minimum variance hedge (Miller)

    Hi @sandra1122 Question 305.1 is looking for the optimal (i.e., minimum variance) mix between (A) and (B) in a portfolio that has a total weight of 100.0% because it is based on w(a) + w(b) = 100%. So it's like assuming you have $100.0 to allocate between the assets, but you must allocate all $100.0 to some combination. That's what I meant by constraint. Question 305.2 instead starts with the...
    Replies: 15 · Views: 529
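    A sketch of the constrained (w(a) + w(b) = 100%) minimum-variance mix using the standard two-asset formula; the volatilities and correlation below are illustrative, not the figures from question 305.1.

```python
# Two-asset minimum-variance weights under w_a + w_b = 1.
sigma_a, sigma_b, rho = 0.20, 0.30, 0.40   # illustrative inputs

cov = rho * sigma_a * sigma_b
w_a = (sigma_b**2 - cov) / (sigma_a**2 + sigma_b**2 - 2 * cov)
w_b = 1 - w_a

print(w_a, w_b)   # ~0.805 in A, ~0.195 in B for these inputs
```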
  20. David Harper CFA FRM

    L1.T2.104 Exponentially weighted moving average (EWMA)

    @Deepak Chitnis and @David Harper CFA FRM CIPM thanks for your replies...I will make sure I keep a special eye on whether the question mentions simple vs LN returns. If the question mentions neither, I think I shall plump for the LN option as that just feels more "right" to me. But hopefully it won't be too much of an issue.
    Replies: 27 · Views: 526
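    A tiny sketch of the simple-versus-log-return distinction being discussed (prices are illustrative):

```python
# Simple (discrete) return versus log (continuously compounded) return
# for the same price move.
from math import log

p0, p1 = 10.0, 10.4        # illustrative prices

simple_return = p1 / p0 - 1    # 4.000%
log_return = log(p1 / p0)      # 3.922%

print(simple_return, log_return)
```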
  21. Fran

    P1.T2.306. Calculate the mean and variance of sums of variables.

    Hi @jacek Yes, thank you, that is our typo. We appreciate that you posted the feedback. We will fix this. @Nicole Seaman she is correct (let me put that another way: question 306.1 above has a correct version); it should be: r(i) = a(i)*F + sqrt[1-a(i)^2]*e(i); which is also represented elsewhere with identical meaning (e.g., Malz Chapter 8) as: a(i) = β(i)*m + sqrt[1-β(i)^2]*e(i)
    Replies: 33 · Views: 524
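    A short simulation of the corrected single-factor form quoted above, r(i) = a(i)*F + sqrt[1 - a(i)^2]*e(i); the factor loadings are illustrative.

```python
# Single-factor model: each asset return loads on one common factor F,
# plus an independent idiosyncratic shock e(i).
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.3, 0.5, 0.7])          # illustrative factor loadings a(i)

F = rng.standard_normal()              # common factor draw
e = rng.standard_normal(a.size)        # idiosyncratic draws e(i)

r = a * F + np.sqrt(1 - a**2) * e      # each r(i) is standard normal,
print(r)                               # with corr(r(i), r(j)) = a(i)*a(j)
```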
  22. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models

    @Annette007 That link (i.e., ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded it here for you also @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A is meant to be available to all subscribers). In almost...
    Replies: 21 · Views: 482
  23. LL

    63.1

    Thanks David!
    Replies: 12 · Views: 477
  24. Nicole Seaman

    P1.T2.405. Distributions I

    Hi @uness_o7 There are two issues, I think. First, if we were conducting a test of the sample mean (e.g., what is the probability of obtaining a sample mean profit of $25 million next week), then we need the standard error. If we know the population variance (which is not given) we can assume Z = (mean X - µ)/SQRT[σ(p)^2/n]. But realistically (as is also the case in this question) we don't...
    Replies: 16 · Views: 398
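    A hedged sketch of the test statistic described above, Z = (sample mean - µ)/SQRT[σ²/n], together with the t-distribution alternative when σ must be estimated. The numbers are illustrative, not those of question 405.

```python
# Test of a sample mean with a standard error in the denominator.
from math import sqrt
from scipy.stats import norm, t

x_bar, mu, sigma, n = 25.0, 20.0, 15.0, 36   # illustrative inputs

se = sigma / sqrt(n)
z = (x_bar - mu) / se
print(z, 1 - norm.cdf(z))          # if sigma were the known population sd

# With an estimated sd, the same statistic is compared to t with n-1 df:
print(1 - t.cdf(z, df=n - 1))
```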
  25. Suzanne Evans

    P1.T2.208. Sample mean estimators (Stock & Watson)

    Hi David, I was just referring to the previous discussion to give better understanding to my question:) Thanks a lot for your time and patience. Praveen
    Replies: 21 · Views: 397
  26. David Harper CFA FRM

    L1.T2.108 Volatility forecast with GARCH(1,1)

    Hi @Tania Pereira Right, either is acceptable and, in the case of question 108.3 above, it makes a difference: the given answer is 2.363% but if we instead computed a discrete daily return (i.e., 11.052/10 - 1 = 3.83%) then the 10-day volatility forecast is 2.429%, a difference of 0.066%. That's why this older question of mine is clearly imprecise (sorry): the question needs to specify that...
    Replies: 26 · Views: 391
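    For context on the GARCH(1,1) mechanics behind this thread, here is a generic sketch of the one-day variance update and the k-day-ahead forecast. The parameters and the return are illustrative placeholders, not the inputs of question 108.3.

```python
# GARCH(1,1): update today's variance from yesterday's return, then
# forecast the expected variance k days ahead.
from math import sqrt

omega, alpha, beta = 0.000002, 0.06, 0.92   # illustrative parameters
prev_sigma2 = 0.0001                         # yesterday's variance estimate
u = 0.01                                     # hypothetical one-day return

sigma2 = omega + alpha * u**2 + beta * prev_sigma2   # today's variance

VL = omega / (1 - alpha - beta)                       # long-run variance
k = 10
sigma2_k = VL + (alpha + beta)**k * (sigma2 - VL)     # E[variance] in k days

print(sqrt(sigma2), sqrt(sigma2_k))                   # daily volatilities
```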
  27. David Harper CFA FRM

    L1.T2.109 EWMA covariance

    Hi @FM22 From Hull 23.7:
    Replies: 9 · Views: 373
  28. Suzanne Evans

    P1.T2.311. Probability Distributions III, Miller

    Hi @s3filin This is a typical Monte Carlo assumption: that certain risk factors are (at least a little bit) correlated. This would be used any time we want correlated normals in a Monte Carlo simulation; it's not too much to say that independence (i.e., zero correlation) would be the unusual assumption. But it's super-easy to generate non-correlated normals, so the point is to...
    Replies: 25 · Views: 370
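    A short sketch of generating correlated normals for a Monte Carlo simulation via a Cholesky factor; the correlation matrix is illustrative.

```python
# Correlated standard normals = Cholesky factor of the correlation matrix
# applied to independent standard normals.
import numpy as np

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.2],
                 [0.3, 0.2, 1.0]])          # illustrative target correlations

L = np.linalg.cholesky(corr)                # lower-triangular factor
z = rng.standard_normal((3, 100_000))       # independent standard normals
x = L @ z                                   # correlated standard normals

print(np.corrcoef(x))                       # close to the target matrix
```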
  29. Suzanne Evans

    P1.T2.204. Joint, marginal, and conditional probability functions (Stock & Watson)

    Hi Melody (@superpocoyo ) Here is the spreadsheet @ Please note that, in my response to mastvikas above, I had a typo which I've now corrected. It should read: (10 - 29.38)^2*(0.05/.32) = 58.65. The conditional variance of 105.859 is what determines the answer of 10.3 (the conditional standard deviation). I think the key here is to realize that, after we grok the conditionality, we are...
    Replies: 10 · Views: 366
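    A generic sketch of the conditional mean/variance pattern behind the (10 - 29.38)^2*(0.05/0.32) term: condition a row of the joint probability table on the event, then take probability-weighted squared deviations. The x values and probabilities below are hypothetical; 0.32 plays the role of the conditioning event's probability.

```python
# Conditional mean and variance from a joint probability table (hypothetical values).
xs = [10, 25, 40]
joint_p = [0.05, 0.12, 0.15]              # P(X = x AND event), summing to 0.32

p_event = sum(joint_p)
cond_p = [p / p_event for p in joint_p]   # conditional pmf P(X = x | event)

cond_mean = sum(x * p for x, p in zip(xs, cond_p))
cond_var = sum((x - cond_mean) ** 2 * p for x, p in zip(xs, cond_p))

print(cond_mean, cond_var ** 0.5)         # conditional mean and std deviation
```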
  30. jakub

    Q 206: Variance of sample average

    Yes, only because we are concerned with the (sample) distribution of an AVERAGE return over 5 years. This variable itself (an average of 5 variables) has a standard deviation called the standard error of the sample average (if you have Hull, it is really the same as his Example 14.3 in Chapter 5), thanks
    Replies: 3 · Views: 361
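    A one-line version of the point above: the sample average of n i.i.d. returns has a standard deviation (standard error) of σ/sqrt(n). The inputs are illustrative.

```python
# Standard error of a sample average of n i.i.d. observations.
from math import sqrt

sigma, n = 0.20, 5             # illustrative per-year volatility, 5 years

se_of_average = sigma / sqrt(n)
print(se_of_average)           # ~0.0894, i.e., about 8.94%
```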
