P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. Pam Gordon

    P1.T2.309. Probability Distributions I, Miller Chapter 4

    Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00% which I do also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, false) = 0.180. I'm insecure, I like to check it with the Excel function ;) Thanks!
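The binomial check in the post above can be reproduced without Excel; this is a minimal sketch of the same arithmetic (=C(100,95)*.95^95*.05^5, i.e. BINOM.DIST(95, 100, 0.95, FALSE)):

```python
from math import comb

# Probability of exactly 95 successes in 100 independent trials,
# each succeeding with probability 0.95.
n, k, p = 100, 95, 0.95
pmf = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(pmf, 3))  # 0.18
```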
    Replies:
    55
    Views:
    1,210
  2. Suzanne Evans

    P1.T2.203. Skew and kurtosis (Stock & Watson)

    Thank you, you are too kind @jacek. Please, no offense taken! I am grateful for your attention. Question 203.3 was written in 2012 (that is the meaning of "2xx", just FYI; this year's questions are numbered "7xx"). I mention that only because I would not write this question today; never mind that it is actually based on an old GARP exam question. Today, I agree with you fully about this. I view kurtosis...
    Replies:
    10
    Views:
    276
  3. Fran

    P1.T2.306. Calculate the mean and variance of sums of variables.

    Hi @jacek Yes, thank you, that is our typo. We appreciate that you posted the feedback. We will fix this. @Nicole Seaman she is correct (let me put that another way: question 306.1 above has a correct version), it should be: r(i) = a(i)*F + sqrt[1-a(i)^2]*e(i); which is also represented elsewhere with identical meaning (eg, Malz Chapter 8) as: a(i) = β(i)*m + sqrt[1-β(i)^2]*e(i)
    Replies:
    33
    Views:
    550
  4. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Learning objectives: Define copula and describe the key properties of copulas and copula correlation. Explain tail dependence. Describe the Gaussian copula, Student’s t-copula, multivariate copula, and one-factor copula. Questions: 707.1. Below are the joint probabilities for a cumulative bivariate normal distribution with a correlation parameter, ρ, of 0.30. If V(1) and V(2) are each...
    Replies:
    0
    Views:
    35
  5. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models

    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded it here for you also. @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A is meant to be available to all subscribers). In almost...
    Replies:
    21
    Views:
    550
  6. Fran

    P1.T2.302. Bayes' Theorem (Miller)

    thank you!!
    Replies:
    11
    Views:
    320
  7. Nicole Seaman

    P1.T2.703. EWMA versus GARCH volatility (Hull)

    Learning objectives: Apply the exponentially weighted moving average (EWMA) model to estimate volatility. Describe the generalized autoregressive conditional heteroskedasticity (GARCH(p,q)) model for estimating volatility and its properties. Calculate volatility using the GARCH(1,1) model. Questions: 703.1. The most recent estimate of the daily volatility of an asset is 4.0% and the price...
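The two update rules this question contrasts can be sketched in a few lines. The parameter values below (λ, ω, α, β, and today's 2.0% return) are illustrative assumptions, not the question's actual inputs; only the 4.0% prior volatility comes from the snippet:

```python
def ewma_update(sigma2, u, lam=0.94):
    # EWMA: sigma^2(n) = lambda*sigma^2(n-1) + (1 - lambda)*u^2(n-1)
    return lam * sigma2 + (1 - lam) * u**2

def garch_update(sigma2, u, omega=0.000002, alpha=0.06, beta=0.92):
    # GARCH(1,1): sigma^2(n) = omega + alpha*u^2(n-1) + beta*sigma^2(n-1)
    return omega + alpha * u**2 + beta * sigma2

sigma, u = 0.04, 0.02                     # prior daily vol 4.0%, today's return 2.0% (assumed)
print(ewma_update(sigma**2, u) ** 0.5)    # updated EWMA volatility, about 0.0391
# GARCH long-run variance = omega / (1 - alpha - beta); here long-run vol is about 1%
print((0.000002 / (1 - 0.06 - 0.92)) ** 0.5)
```

Note that EWMA is the special case of GARCH(1,1) with ω = 0 and α + β = 1, which is why it has no long-run (mean-reverting) level.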
    Replies:
    0
    Views:
    29
  8. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3) along with daily log returns, squared...
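A minimal sketch of the simple (equally weighted) estimator the question uses: squared daily log returns, equal weights, with the mean return assumed zero (Hull's usual simplification). The prices below are made up, not the question's table:

```python
import math

prices = [20.00, 20.10, 19.90, 20.05, 20.20, 20.00]     # hypothetical closes
rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]  # daily log returns
var = sum(u * u for u in rets) / len(rets)               # equally weighted, zero mean assumed
vol = math.sqrt(var)                                     # daily volatility estimate
print(vol)
```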
    Replies:
    0
    Views:
    20
  9. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    Learning objective: Describe the sources of seasonality and how to deal with it in time series analysis. Questions 700.1. Which of the following time series is MOST LIKELY to contain a seasonal pattern? a. Price of solar panels b. Employment participation rate c. Climate data recorded from a weather station once per year d. Return on average assets (ROA) for a large commercial bank...
    Replies:
    0
    Views:
    39
  10. Pam Gordon

    P1.T2.310. Probability Distributions II, Miller Chapter 4

    Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of 10% = E[µ(B) − µ(A)] = µ[difference] = +10%. And we are looking for the probability that we observe a difference of 18.0%, so we want the probability given by Z = (observed − µ[diff])/σ[diff]. Thanks,
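The test statistic in that post is one line of arithmetic. The standard error of the difference is not shown in the preview, so the 4.0% below is only an illustrative assumption:

```python
def z_stat(observed, mu_diff, sigma_diff):
    # Z = (observed difference - null expected difference) / std error of difference
    return (observed - mu_diff) / sigma_diff

# Observed +18%, null difference +10%, assumed sigma of the difference 4%:
print(z_stat(0.18, 0.10, 0.04))  # about 2.0 under the assumed sigma
```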
    Replies:
    45
    Views:
    1,042
  11. Nicole Seaman

    P1.T2.407. Univariate linear regression

    Hello @uness_o7 Thank you for pointing this out. I will get this fixed as soon as possible. Nicole
    Replies:
    12
    Views:
    217
  12. Nicole Seaman

    P1.T2.405. Distributions I

    Hi @uness_o7 There are two issues, I think. First, if we were conducting a test of the sample mean (e.g., what is the probability of obtaining a sample mean profit of $25 million next week), then we need the standard error. If we know the population variance (which is not given) we can assume Z = (mean X - µ)/SQRT[σ(p)^2/n]. But realistically (as is also the case in this question) we don't...
    Replies:
    16
    Views:
    428
  13. Suzanne Evans

    P1.T2.214. Regression lines (Stock & Watson)

    Hi Ben (@ohmanb ) Yes, it's foundational! You really just need these: cov(x,y) = standard_deviation(x)*standard_deviation(y)*correlation(x,y) = σ(x)*σ(y)*ρ(x,y); and covariance(x,x) = variance(x), which you can see from above because ρ(x,x) = 1.0, so that cov(x,x) = σ(x)*σ(x)*1.0 = σ(x)^2. Therefore, β(i,M) = cov(i,M)/σ(M)^2 = [σ(i)*σ(M)*ρ(i,M)]/σ(M)^2, and we can cancel one StdDev such that =...
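The identity in that post can be checked numerically; the inputs below are made up:

```python
# beta(i,M) = cov(i,M)/var(M) should equal rho(i,M)*sigma(i)/sigma(M).
sigma_i, sigma_m, rho = 0.30, 0.20, 0.50   # hypothetical vols and correlation
cov_im = sigma_i * sigma_m * rho           # cov(i,M) = sigma_i * sigma_M * rho
beta = cov_im / sigma_m**2                 # cancel one sigma_M ...
print(beta)                                # 0.75
print(rho * sigma_i / sigma_m)             # ... same answer: 0.75
```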
    Replies:
    13
    Views:
    246
  14. uness_o7

    R16.P1.T2. Hull - expected value of u(n+t-1)^2

    You need the assumption that the drift term of u can be neglected. If u(t) is a random variable, then its variance is defined as σ(t)^2 = E[u(t)^2] − E[u(t)]^2. If you now assume that the E[u(t)]^2 term can be neglected relative to the E[u(t)^2] term, then you get to your result. u(t) models the return for a time period dt: u(t) = (s(t) − s(t − dt)) / s(t − dt), which means that E( u(t)...
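The size of that neglected term is easy to put in numbers. The daily mean and volatility below are illustrative assumptions, chosen to be typical daily magnitudes:

```python
# var(u) = E[u^2] - E[u]^2; for daily returns, E[u]^2 is tiny
# relative to E[u^2], so E[u^2] is a good stand-in for the variance.
mu, sigma = 0.0005, 0.01          # assumed daily mean and volatility
e_u2 = sigma**2 + mu**2           # E[u^2] = variance + mean^2
rel_error = (e_u2 - sigma**2) / sigma**2   # error from using E[u^2] as variance
print(round(rel_error, 6))        # 0.0025, i.e. only 0.25%
```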
    Replies:
    1
    Views:
    17
  15. Suzanne Evans

    P1.T2.222. Homoskedasticity-only F-statistic

    @uness_o7 brilliant, I just did not see it. Thank you!
    Replies:
    14
    Views:
    315
  16. David Harper CFA FRM

    L1.T2.93 Jarque-Bera

    I would like to dig a bit deeper in the theory about the Jarque-Bera (JB) Test because it is a very useful test and what is more, it is very easy to implement (without using econometric software) and to understand (not only for testing regression residuals, but also for simple stock returns). Much to my surprise the JB-test is not a mandatory reading of the FRM (apparently it has been removed...
    Replies:
    14
    Views:
    149
  17. Suzanne Evans

    P1.T2.221. Joint null hypothesis in multiple OLS regression

    it went out of my head at that very moment, thanks.
    Replies:
    14
    Views:
    319
  18. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies:
    25
    Views:
    855
  19. Nicole Seaman

    P1.T2.508. Wold's theorem

    [USER=42750]@ Okay great. No worries, honestly I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...
    Replies:
    4
    Views:
    213
  20. David Harper CFA FRM

    L1.T2.94 Forecasting (prediction) error

    Hi @FRM The predictor variance (aka, forecasting or prediction error) is from the previously assigned Gujarati, but is no longer assigned in P1.T2. Regressions; it's a bit too difficult. Sorry. Thank you!
    Replies:
    2
    Views:
    81
  21. Suzanne Evans

    P1.T2.216. Regression sums of squares: ESS, SSR, and TSS

    Hi [USER=42750]@ Maybe my notation isn't typical here, come to think of it, but ESS, TSS and RSS are all units squared. They are very much like variances. So in 216, for example, as the observational units are dollars, the regression squared sums (i.e., TSS and RSS) are units-squared, so dollars^2 still looks okay to me. The SER, on the other hand, is back in dollars. To tell you the truth, the...
    Replies:
    13
    Views:
    252
  22. Nicole Seaman

    PQ-T2 P1.T2.319. Probabilities (Topic Review)

    Hi @Angelinelyt Under annual compounding, the price for this 12-year zero-coupon bond is given by P = 100/(1+y)^12. We want the yield that would imply the lower price, such that $60.00 = 100/(1+y)^12, so (1+y)^12 = 100/60 and y = (100/60)^(1/12) - 1. This sets up the yield shock required for the bond price to drop: From current $62.46 = $100/(1+4.000%)^12, Down to: $60.00 =...
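The bond arithmetic in that reply, in code: a 12-year zero under annual compounding is priced as P = 100/(1+y)^12, and the yield implied by a drop to $60.00 inverts that formula:

```python
# Current price at y = 4.000%:
p_now = 100 / 1.04**12
# Yield implied by a price of $60.00: solve 60 = 100/(1+y)^12 for y.
y_implied = (100 / 60) ** (1 / 12) - 1
print(round(p_now, 2))      # 62.46
print(round(y_implied, 4))  # 0.0435, i.e. about 4.35%
```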
    Replies:
    11
    Views:
    313
  23. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved: e1 = z1 = -0.88; e2 = ρ*z1 + z2*sqrt(1 - ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1-(0.7)^2)] = -0.16609; U = mean + SD*e1 = 5 + [3*(-0.88)] = 2.36; V = mean + SD*e2 = 10 + [6*(-0.16609)] = 9.00346. Thanks, Rajiv
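Rajiv's one-factor construction can be verified step by step:

```python
import math

# Two independent standard normal draws are combined into
# correlated epsilons via e2 = rho*z1 + z2*sqrt(1 - rho^2),
# then scaled to U and V with their means and SDs.
z1, z2, rho = -0.88, 0.63, 0.70
e1 = z1
e2 = rho * z1 + z2 * math.sqrt(1 - rho**2)
U = 5 + 3 * e1     # mean 5, SD 3
V = 10 + 6 * e2    # mean 10, SD 6
print(round(e2, 5), round(U, 2), round(V, 5))  # -0.16609 2.36 9.00346
```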
    Replies:
    20
    Views:
    825
  24. Fran

    P1.T2.301. Miller's probability matrix

    For working out the mean of f(x), we integrate x*f(x) instead of just integrating f(x) as in the green statement above. Integrating x*f(x) is not tricky once you know how to solve the green statement: you just have an extra factor of x. After integrating x*f(x), you can solve it by putting x = 6.
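The point of that post in numbers: the mean of a density f is the integral of x*f(x), not of f(x). The density below is hypothetical (not necessarily Miller's); it just integrates to 1 on [0, 6]:

```python
from fractions import Fraction

# Take f(x) = x/18 on [0, 6], a valid density since the integral of
# x/18 from 0 to 6 is 36/36 = 1. The mean integrates x*f(x) = x^2/18:
# antiderivative is x^3/54, so mean = 6^3/54.
mean = Fraction(6**3, 54)
print(mean)  # 4
```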
    Replies:
    23
    Views:
    813
  25. Nicole Seaman

    PQ-T2 P1.T2.322. Multivariate linear regression (topic review)

    Hi @Aradhikka My apologies: the displayed values are rounded. The question is entirely realistic (based on actual dataset) such that the MEAL_PCT coefficient = -0.545566 and its standard error = 0.021518 which gives t ratio of 25.35. Yours looks approximately correct for the displayed values (which is all you have of course). So it's just rounding. I have tagged it for non-urgent revision....
    Replies:
    6
    Views:
    156
  26. David Harper CFA FRM

    L1.T1.92 Coefficients of determination and correlation

    @Angelinelyt These regression questions were written based on a previous author (Gujarati, who preceded Stock and Watson) in quantitative methods. He referred to univariate regressions as two-variable regressions because in the univariate regression, y(i) = a(0) + β(1)*X(1), there is an independent plus a dependent variable (ie, two variables including the dependent). In retrospect, this is...
    Replies:
    9
    Views:
    105
  27. David Harper CFA FRM

    L1.T2.124 Exponential versus Poisson

    Yes, thank you @AGM777 for the correction to my mistake (note: thread post mistake only, no change to source Q&A)
    Replies:
    14
    Views:
    204
  28. David Harper CFA FRM

    L1.T2.85 Sample regression function (SRF)

    Thanks David.
    Replies:
    7
    Views:
    72
  29. David Harper CFA FRM

    L1.T2.89 OLS standard errors

    Hi @kik92 It's a fair question. Although the FRM exam has yet (to my knowledge) to explicate the implicit assumption of homoscedasticity (i.e., the typical regression question simply assumes it), new questions probably should attach a clarification such as "Assuming a classical linear regression model (CLRM)" or, less cheeky, "Assuming homoskedastic errors per the Gauss-Markov Theorem ..." In...
    Replies:
    11
    Views:
    184
  30. Nicole Seaman

    PQ-T2 P1.T2.324. Estimating volatility (Topic Review)

    Hi @Srilakshmi Yes, you are exactly correct. In question 324.1, GARCH persistence = α + β = 0.06 + 0.82 = 0.880. And this has (had) a source, and it is occasionally used this way. For example, see: "The persistence of a garch model has to do with how fast large volatilities decay after a shock. For the garch(1,1) model the key statistic is the sum of the two main parameters (alpha1 and beta1,...
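The persistence arithmetic above, plus one common follow-on that is my own illustrative addition (not part of the original question): the half-life of a variance shock, i.e. how many days until half the shock has decayed:

```python
import math

alpha, beta = 0.06, 0.82
persistence = alpha + beta                        # 0.88: shock decays by this factor per day
half_life = math.log(0.5) / math.log(persistence) # days until half the shock remains
print(round(persistence, 2))   # 0.88
print(round(half_life, 1))     # about 5.4 days
```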
    Replies:
    7
    Views:
    262
