P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. David Harper CFA FRM

    P1.T2.201. Random variables (Stock & Watson)

    Hello! In case it helps in the future, this curriculum analysis spreadsheet shows all of the changes in the FRM curriculum from year to year: . You can easily use the search function to find specific concepts and learning objectives. For example, when you search for the learning objectives in this specific question set, you will find that they are under Miller, Chapter 2: Probabilities in...
    Replies: 14 · Views: 352
  2. Pam Gordon

    P1.T2.309. Probability Distributions I, Miller Chapter 4

    Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00%, which I also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, FALSE) = 0.180. I'm insecure, so I like to check it with the Excel function ;) Thanks!
    Replies: 55 · Views: 1,247
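
A quick way to reproduce the 18.00% figure above outside of Excel, as a minimal Python sketch using the same numbers quoted in the thread (n = 100, p = 0.95, k = 95):

```python
# Cross-check of the 18.00% above: Pr(X = 95) for a binomial with n = 100 trials
# and p = 0.95, i.e., the same quantity as Excel's BINOM.DIST(95, 100, 0.95, FALSE).
from scipy.stats import binom

print(round(binom.pmf(k=95, n=100, p=0.95), 4))  # ~0.1800
```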
  3. Suzanne Evans

    P1.T2.203. Skew and kurtosis (Stock & Watson)

    Thanks, you are too kind @jacek. Please, no offense taken! I am grateful for your attention. Question 203.3 was written in 2012 (that is the meaning of "2xx", just FYI; this year's questions are numbered "7xx"). I mention that only because I would not write this question today; never mind that it is actually based on an old GARP exam question. Today, I agree with you fully about this. I view kurtosis...
    Replies: 10 · Views: 280
  4. Fran

    P1.T2.306. Calculate the mean and variance of sums of variables. (Miller)

    Hi @jacek Yes, thank you, that is our typo. We appreciate that you posted the feedback. We will fix this. @Nicole Seaman she is correct (let me put that another way: question 306.1 above has a correct version), it should be: r(i) = a(i)*F + sqrt[1-a(i)^2]*e(i); which is also represented elsewhere with identical meaning (eg, Malz Chapter 8) as: a(i) = β(i)*m + sqrt[1-β(i)^2]*e(i)
    Replies: 33 · Views: 572
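
For readers who want to see the corrected formula in action, here is a minimal simulation sketch of r(i) = a(i)*F + sqrt[1-a(i)^2]*e(i); the loadings a(i) below are illustrative assumptions, not values from question 306.1:

```python
# One-factor model sketch: r_i = a_i*F + sqrt(1 - a_i^2)*e_i, with F and e_i
# independent standard normals. The loadings are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.6, 0.8])                 # hypothetical factor loadings
F = rng.standard_normal(500_000)         # common factor
e = rng.standard_normal((2, 500_000))    # idiosyncratic shocks
r = a[:, None] * F + np.sqrt(1 - a[:, None]**2) * e

print(np.var(r, axis=1))     # each ~1.0, since a^2 + (1 - a^2) = 1
print(np.corrcoef(r)[0, 1])  # ~a_1 * a_2 = 0.48
```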
  5. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Learning objectives: Define copula and describe the key properties of copulas and copula correlation. Explain tail dependence. Describe the Gaussian copula, Student’s t-copula, multivariate copula, and one-factor copula. Questions: 707.1. Below are the joint probabilities for a cumulative bivariate normal distribution with a correlation parameter, ρ, of 0.30. If V(1) and V(2) are each...
    Replies: 0 · Views: 40
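
As a hedged illustration of where joint probabilities like those in 707.1 come from, the cumulative bivariate standard normal with ρ = 0.30 can be evaluated directly; the evaluation point below is illustrative, not taken from the question's table:

```python
# Cumulative bivariate standard normal with correlation 0.30, evaluated at an
# illustrative point (0, 0); with rho > 0 the joint probability exceeds 0.25.
from scipy.stats import multivariate_normal

rho = 0.30
p = multivariate_normal.cdf([0.0, 0.0], mean=[0.0, 0.0],
                            cov=[[1.0, rho], [rho, 1.0]])
print(round(p, 4))  # ~0.2985
```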
  6. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models (Hull)

    @Annette007 That link (ie, ) still looks good to me, so I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded it here for you as well. @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A is meant to be available to all subscribers). In almost...
    Replies: 21 · Views: 562
  7. Fran

    P1.T2.302. Bayes' Theorem (Miller)

    thank you!!
    Replies: 11 · Views: 330
  8. Nicole Seaman

    P1.T2.703. EWMA versus GARCH volatility (Hull)

    Learning objectives: Apply the exponentially weighted moving average (EWMA) model to estimate volatility. Describe the generalized autoregressive conditional heteroskedasticity (GARCH(p,q)) model for estimating volatility and its properties. Calculate volatility using the GARCH(1,1) model. Questions: 703.1. The most recent estimate of the daily volatility of an asset is 4.0% and the price...
    Replies: 0 · Views: 32
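
A minimal sketch of the one-step variance updates behind questions like 703.1. Only the 4.0% prior volatility comes from the excerpt; the return, λ, ω, α, and β below are assumed for illustration:

```python
import math

sigma_prev = 0.04   # most recent daily volatility estimate (from the excerpt)
u_prev = 0.03       # assumed most recent daily return

# EWMA: sigma^2_n = lambda*sigma^2_(n-1) + (1 - lambda)*u^2_(n-1)
lam = 0.94
var_ewma = lam * sigma_prev**2 + (1 - lam) * u_prev**2

# GARCH(1,1): sigma^2_n = omega + alpha*u^2_(n-1) + beta*sigma^2_(n-1)
omega, alpha, beta = 0.000002, 0.06, 0.92
var_garch = omega + alpha * u_prev**2 + beta * sigma_prev**2

print(round(math.sqrt(var_ewma), 5), round(math.sqrt(var_garch), 5))
```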
  9. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions: 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3) along with daily log returns, squared...
    Replies: 0 · Views: 23
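
For the mechanics behind 702.1, here is a minimal sketch of the simple (equally weighted) estimator: daily log returns from closing prices, then the average squared return, using Hull's zero-mean, divide-by-m simplification. The price series is made up, not the question's data:

```python
import numpy as np

prices = np.array([20.00, 20.10, 19.90, 20.00, 20.25,
                   20.15, 20.30, 20.20, 20.40, 20.50])   # illustrative closes
u = np.diff(np.log(prices))   # daily log returns
var_daily = np.mean(u**2)     # equally weighted, zero-mean simplification
print(round(np.sqrt(var_daily), 5))   # daily volatility estimate
```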
  10. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    Learning objective: Describe the sources of seasonality and how to deal with it in time series analysis. Questions: 700.1. Which of the following time series is MOST LIKELY to contain a seasonal pattern? a. Price of solar panels b. Employment participation rate c. Climate data recorded from a weather station once per year d. Return on average assets (ROA) for the large commercial bank...
    Replies: 0 · Views: 41
  11. Pam Gordon

    P1.T2.310. Probability Distributions II, Miller Chapter 4

    Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of +10% = E[µ(B) - µ(A)] = µ[difference]. And we are looking for the probability that we observe a difference of 18.0%, so we want the probability implied by Z = (observed - µ[diff])/σ. Thanks,
    Replies: 48 · Views: 1,082
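
A hedged sketch of the test in the reply above: under the null the expected difference is +10%, the observed difference is +18%, and we standardize the gap. The standard error used here is a placeholder assumption, since the excerpt does not show the question's value:

```python
from scipy.stats import norm

mu_diff = 0.10     # expected difference under the null (from the excerpt)
observed = 0.18    # observed difference (from the excerpt)
se = 0.05          # hypothetical standard error of the difference

z = (observed - mu_diff) / se
print(round(z, 2), round(norm.sf(z), 4))   # z-score and one-tailed probability
```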
  12. Nicole Seaman

    Quiz-T2 P1.T2.407. Univariate linear regression

    Hello @uness_o7 Thank you for pointing this out. I will get this fixed as soon as possible. Nicole
    Replies: 12 · Views: 220
  13. Nicole Seaman

    Quiz-T2 P1.T2.405. Distributions I

    Hi @uness_o7 There are two issues, I think. First, if we were conducting a test of the sample mean (e.g., what is the probability of obtaining a sample mean profit of $25 million next week), then we need the standard error. If we know the population variance (which is not given) we can assume Z = (mean X - µ)/SQRT[σ(p)^2/n]. But realistically (as is also the case in this question) we don't...
    Replies: 16 · Views: 432
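
To make the distinction in the reply concrete, here is a small sketch with illustrative numbers (none are from the question): with a known population variance we could use Z = (x̄ − µ)/sqrt(σ²/n); with only a sample standard deviation we use the t-statistic instead:

```python
import math
from scipy.stats import norm, t

x_bar, mu, n = 25.0, 20.0, 30   # hypothetical sample mean, null mean, sample size
sigma_pop = 12.0                # hypothetical known population sigma
s_sample = 13.0                 # hypothetical sample standard deviation

z = (x_bar - mu) / math.sqrt(sigma_pop**2 / n)
t_stat = (x_bar - mu) / (s_sample / math.sqrt(n))
print(round(norm.sf(z), 4), round(t.sf(t_stat, df=n - 1), 4))  # one-tailed p-values
```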
  14. Suzanne Evans

    P1.T2.214. Regression lines (Stock & Watson)

    Hi Ben (@ohmanb) Yes, it's foundational! You really just need these: cov(x,y) = standard_deviation(x)*standard_deviation(y)*correlation(x,y) = σ(x)*σ(y)*ρ(x,y); and covariance(x,x) = variance(x), which you can see from above because ρ(x,x) = 1.0, so that cov(x,x) = σ(x)*σ(x)*1.0 = σ(x)^2. Therefore, β(i,M) = cov(i,M)/σ(M)^2 = [σ(i)*σ(M)*ρ(i,M)]/σ(M)^2, and we can cancel one StdDev such that =...
    Replies: 13 · Views: 250
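
The identities above in quick numerical form (inputs are illustrative): cov(i,M) = σ(i)*σ(M)*ρ(i,M), and β(i,M) = cov(i,M)/σ(M)^2 = σ(i)*ρ(i,M)/σ(M) after cancelling one σ(M):

```python
sigma_i, sigma_m, rho = 0.20, 0.15, 0.60   # illustrative inputs

cov_im = sigma_i * sigma_m * rho           # covariance from the identity
beta = cov_im / sigma_m**2                 # beta(i, M)
print(round(beta, 4), round(sigma_i * rho / sigma_m, 4))   # both 0.8
```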
  15. Suzanne Evans

    P1.T2.222. Homoskedasticity-only F-statistic (Stock & Watson)

    @uness_o7 brilliant, I just did not see it. Thank you!
    Replies: 14 · Views: 319
  16. David Harper CFA FRM

    L1.T2.93 Jarque-Bera (Gujarati)

    I would like to dig a bit deeper in the theory about the Jarque-Bera (JB) Test because it is a very useful test and what is more, it is very easy to implement (without using econometric software) and to understand (not only for testing regression residuals, but also for simple stock returns). Much to my surprise the JB-test is not a mandatory reading of the FRM (apparently it has been removed...
    Replies: 14 · Views: 151
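
Since the post stresses how easy the JB test is to compute without econometric software, here is a minimal sketch of JB = n/6 * [S^2 + (K − 3)^2/4] on simulated fat-tailed returns (illustrative data only), checked against scipy's built-in version:

```python
import numpy as np
from scipy.stats import skew, kurtosis, jarque_bera

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=1000)        # fat-tailed "returns" (illustrative)

n = len(x)
S = skew(x)                                # sample skewness
K = kurtosis(x, fisher=False)              # raw (not excess) kurtosis
jb_manual = n / 6.0 * (S**2 + (K - 3.0)**2 / 4.0)

jb_scipy, p_value = jarque_bera(x)
print(round(jb_manual, 3), round(jb_scipy, 3), round(p_value, 4))
```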
  17. Suzanne Evans

    P1.T2.221. Joint null hypothesis in multiple OLS regression (Stock & Watson)

    It went out of my head at that very moment, thanks.
    Replies: 14 · Views: 324
  18. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies: 25 · Views: 863
  19. Nicole Seaman

    P1.T2.508. Wold's theorem (Diebold)

    [USER=42750]@ Okay great. No worries, honestly I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...
    Replies: 4 · Views: 219
  20. David Harper CFA FRM

    L1.T2.94 Forecasting (prediction) error (Gujarati)

    Hi @FRM The predictor variance (aka, forecasting or prediction error) is from previously assigned Gujarati, but is no longer assigned in P1.T2 Regressions; it's a bit too difficult. Sorry. Thank you!
    Replies: 2 · Views: 83
  21. Suzanne Evans

    P1.T2.216. Regression sums of squares: ESS, SSR, and TSS (Stock & Watson)

    Hi [USER=42750]@ Maybe my notation isn't typical here, come to think of it, but ESS, TSS and RSS are all units squared. They are very much like variances. So in 216, for example, as the observational units are dollars, the regression squared sums (i.e., TSS and RSS) are units-squared, so dollars^2 still looks okay to me. The SER, on the other hand, is back to dollars. To tell you the truth, the...
    Replies: 13 · Views: 259
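
A small illustration of the units point: with a dependent variable in dollars, TSS, ESS and RSS (where TSS = ESS + RSS) are all in dollars squared, while the SER, being a square root, is back in dollars. The data below are made up:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])        # "dollars"

b1, b0 = np.polyfit(x, y, 1)                   # slope, intercept
y_hat = b0 + b1 * x
tss = np.sum((y - y.mean())**2)                # dollars^2
ess = np.sum((y_hat - y.mean())**2)            # dollars^2
rss = np.sum((y - y_hat)**2)                   # dollars^2
ser = np.sqrt(rss / (len(y) - 2))              # dollars
print(round(tss, 4), round(ess + rss, 4), round(ser, 4))
```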
  22. Nicole Seaman

    PQ-T2 P1.T2.319. Probabilities (Topic Review)

    Hi @Angelinelyt Under annual compounding, the price for this 12-year zero-coupon bond is given by P = 100/(1+y)^12. We want the yield that would imply the lower price, such that $60.00 = 100/(1+y)^12, so (1+y)^12 = 100/60 and y = (100/60)^(1/12) - 1. This sets up the yield shock required for the bond price to drop: From current $62.46 = $100/(1+4.000%)^12, Down to: $60.00 =...
    Replies: 11 · Views: 318
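
The bond arithmetic from the reply, written out: P = 100/(1 + y)^12 under annual compounding, so the yield consistent with a $60.00 price is y = (100/60)^(1/12) − 1:

```python
price_now = 100 / 1.04**12            # ~62.46 at y = 4.000%
y_shocked = (100 / 60) ** (1 / 12) - 1
print(round(price_now, 2), round(y_shocked, 5))   # 62.46, ~0.04349 (about 4.35%)
```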
  23. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved it: e1 = z1 = -0.88; e2 = ρ*z1 + z2*sqrt(1-ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1-(0.70)^2)] = -0.16609; U = mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36; V = mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346. Thanks, Rajiv
    Replies: 20 · Views: 832
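
Rajiv's steps, replicated directly in a short sketch: correlate the second standard normal draw with the first (ρ = 0.70), then scale to the stated means and standard deviations:

```python
import math

z1, z2, rho = -0.88, 0.63, 0.70
e1 = z1
e2 = rho * z1 + z2 * math.sqrt(1 - rho**2)   # -0.16609
U = 5 + 3 * e1                               # mean 5, sd 3  ->  2.36
V = 10 + 6 * e2                              # mean 10, sd 6 ->  9.00346
print(round(e2, 5), round(U, 2), round(V, 5))
```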
  24. Nicole Seaman

    PQ-T2 P1.T2.322. Multivariate linear regression (topic review)

    Hi @Aradhikka My apologies: the displayed values are rounded. The question is entirely realistic (based on an actual dataset) such that the MEAL_PCT coefficient = -0.545566 and its standard error = 0.021518, which gives a t-ratio of 25.35. Yours looks approximately correct for the displayed values (which is all you have, of course). So it's just rounding. I have tagged it for non-urgent revision....
    Replies: 6 · Views: 159
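
The rounding point in one line: with the unrounded coefficient and standard error from the reply, the t-ratio is about 25.35 in absolute value:

```python
coef, se = -0.545566, 0.021518
print(round(abs(coef / se), 2))   # 25.35
```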
  25. David Harper CFA FRM

    L1.T1.92 Coefficients of determination and correlation (Gujarati)

    @Angelinelyt These regression questions were written based on a previous author (Gujarati, who preceded Stock and Watson) in quantitative methods. He referred to univariate regressions as two-variable regressions because in the univariate regression, y(i) = a(0) + β(1)*X(1), there is an independent variable plus a dependent variable (i.e., two variables including the dependent). In retrospect, this is...
    Replies: 9 · Views: 107
  26. David Harper CFA FRM

    L1.T2.124 Exponential versus Poisson (Rachov)

    Yes, thank you @AGM777 for the correction to my mistake (note: thread post mistake only, no change to source Q&A)
    Replies: 14 · Views: 208
  27. David Harper CFA FRM

    L1.T2.85 Sample regression function (SRF) (Gujarati)

    Thanks David.
    Replies: 7 · Views: 74
  28. David Harper CFA FRM

    L1.T2.89 OLS standard errors (Gujarati)

    Hi @kik92 It's a fair question. Although the FRM exam has yet (to my knowledge) to explicate the implicit assumption of homoscedasticity (i.e., the typical regression question simply assumes it), new questions probably should attach a clarification such as "Assuming a classical linear regression model (CLRM)" or, less cheeky, "Assuming homoskedastic errors per the Gauss-Markov Theorem ..." In...
    Replies: 11 · Views: 186
  29. Nicole Seaman

    PQ-T2 P1.T2.324. Estimating volatility (Topic Review)

    Hi @Srilakshmi Yes, you are exactly correct. In question 324.1, GARCH persistence = α + β = 0.06 + 0.82 = 0.880. And this has (had) a source, and it is occasionally used this way. For example: "The persistence of a garch model has to do with how fast large volatilities decay after a shock. For the garch(1,1) model the key statistic is the sum of the two main parameters (alpha1 and beta1,...
    Replies: 7 · Views: 264
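
The persistence figure from 324.1, plus a related quantity that is an assumption here (not in the excerpt): the half-life of a volatility shock implied by that persistence:

```python
import math

alpha, beta = 0.06, 0.82
persistence = alpha + beta                          # 0.88
half_life = math.log(0.5) / math.log(persistence)   # days for a shock to halve
print(round(persistence, 2), round(half_life, 1))   # 0.88, ~5.4
```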
  30. Suzanne Evans

    Question 77: P value

    Hi @SAhmed Apologies, even I can't find the link; this is an old question. It's looking for the F test of equality of variances (based on previously assigned Gujarati). So, per F ratio = variance(larger)/variance(smaller), here the F ratio = 0.12^2/0.10^2 = 1.44, and the p-value (in Excel, but it can also be obtained via a lookup table) is given by F.DIST.RT(1.44, 29 df, 29 df) = 0.165836; i.e., the area...
    Replies: 3 · Views: 29
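
The same F-test checked outside of Excel (scipy's right-tail F survival function plays the role of F.DIST.RT):

```python
from scipy.stats import f

F = 0.12**2 / 0.10**2                  # 1.44
p_value = f.sf(F, dfn=29, dfd=29)      # right-tail area, same as F.DIST.RT
print(round(F, 2), round(p_value, 6))  # 1.44, ~0.165836
```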
