P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions, and VaR (Intro)

  1. Suzanne Evans

    P1.T2.303 Mean and variance of continuous probability density functions (pdf) (Miller)

    Hi @chintanudeshi To retrieve the mean of a continuous probability distribution, we integrate x*f(x) over the probability domain. This is calculus; may I refer you to this terrific video, which explains the mean and variance:
    Replies: 49 · Views: 1,035
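    A minimal sketch of that integration, assuming an illustrative pdf f(x) = 3x^2 on [0, 1] (not from the thread):

    ```python
    # Sketch: mean and variance of a continuous pdf by numerical integration.
    # The pdf f(x) = 3x^2 on [0, 1] is an illustrative assumption, not from the thread.
    from scipy.integrate import quad

    f = lambda x: 3 * x**2                       # a valid pdf: integrates to 1 on [0, 1]

    mean, _ = quad(lambda x: x * f(x), 0, 1)     # E[X] = integral of x*f(x) = 0.75
    ex2, _ = quad(lambda x: x**2 * f(x), 0, 1)   # E[X^2] = 0.60
    var = ex2 - mean**2                          # Var[X] = E[X^2] - E[X]^2 = 0.0375

    print(mean, var)  # 0.75 0.0375
    ```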
  2. Nicole Seaman

    PQ-T2 P1.T2.316. Discrete distributions (Topic review)

    Hi @FRM Mark It's given in the assumption: "316.3. The current price of an asset is S(0) and its future evolution is modeled with a binomial tree. At each node, there is a 62% probability of a (jump-up) increase." :) Thanks!
    Replies: 7 · Views: 227
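    A sketch of terminal-node probabilities in such a tree, using the question's Pr(up) = 0.62; the up/down factors and the number of steps are assumptions for illustration:

    ```python
    # Sketch: terminal-node probabilities in an n-step binomial tree with
    # Pr(up) = 0.62 (from the question); u, d, and n = 3 are assumed for illustration.
    from math import comb

    S0, u, d, p, n = 100.0, 1.1, 0.9, 0.62, 3

    for k in range(n + 1):                         # k = number of up-moves
        price = S0 * u**k * d**(n - k)             # terminal price after k ups, n-k downs
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        print(f"{k} ups: price {price:8.2f}, probability {prob:.4f}")
    ```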
  3. Suzanne Evans

    P1.T2.209 T-statistic and confidence interval (Stock & Watson)

    Thanks a lot!
    Replies: 53 · Views: 1,018
  4. Suzanne Evans

    P1.T2.311. Probability Distributions III, Miller

    Hi @s3filin This is a typical Monte Carlo assumption: that certain risk factors are (at least a little bit) correlated. This would be used any time we want correlated normals in a Monte Carlo Simulation; it's almost not too much to say that independence (i.e., zero correlation) would be the unusual assumption. But it's super-super-easy to generate non-correlated normals, so the point is to...
    Replies: 25 · Views: 453
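    A minimal sketch of the standard Cholesky approach for generating correlated normals; the correlation matrix is an assumption for illustration:

    ```python
    # Sketch: generating correlated standard normals via Cholesky decomposition.
    # The target correlation of 0.70 is an assumption for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    corr = np.array([[1.0, 0.7],
                     [0.7, 1.0]])                 # target correlation between factors

    L = np.linalg.cholesky(corr)                  # corr = L @ L.T
    z = rng.standard_normal((2, 100_000))         # independent N(0,1) draws
    x = L @ z                                     # correlated draws

    print(np.corrcoef(x))                         # off-diagonal should be close to 0.70
    ```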
  5. Nicole Seaman

    P1.T2.701. Regression analysis to model seasonality (Diebold)

    Many thanks for your lovely comment, Brian. It is nothing special, I guess; I have just read a few textbooks, that's it. The modelling (implementation work) is another kettle of fish. These "goodness of fit" tests are technically quite complex. Again, thanks for your like!
    Replies: 11 · Views: 93
  6. Nicole Seaman

    P1.T2.705. Correlation (Hull)

    Thank you emilioalzamora and David for such a detailed explanation.
    Replies: 13 · Views: 155
  7. Nicole Seaman

    P1.T2.506. Covariance stationary time series (Diebold)

    Hi @[user 46018] See below, I copied Diebold's explanation of partial autocorrelation (which is excellent, in my opinion). If you keep in mind the close relationship between beta and correlation, then you can view this as analogous to the difference between (in a regression) a univariate slope coefficient and a partial multivariate slope coefficient. We can extract correlation by multiplying...
    Replies: 6 · Views: 182
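    A sketch of that ACF-versus-PACF distinction on a simulated AR(1) series; the AR coefficient and sample size are assumptions for illustration:

    ```python
    # Sketch: autocorrelation vs partial autocorrelation on a simulated AR(1) series.
    # The AR coefficient (0.6) and sample size are assumptions for illustration.
    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf

    rng = np.random.default_rng(0)
    n, phi = 1_000, 0.6
    x = np.zeros(n)
    for t in range(1, n):                          # AR(1): x_t = phi*x_{t-1} + noise
        x[t] = phi * x[t - 1] + rng.standard_normal()

    print(acf(x, nlags=3))    # decays geometrically: ~1, 0.6, 0.36, 0.22
    print(pacf(x, nlags=3))   # cuts off after lag 1: ~1, 0.6, ~0, ~0
    ```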
  8. Suzanne Evans

    P1.T2.210. Hypothesis testing (Stock & Watson)

    Thanks a lot David!!!! :)
    Replies: 13 · Views: 241
  9. David Harper CFA FRM

    L1.T2.111 Binomial & Poisson (Rachev)

    Hi @s3filin It's a terrific observation :cool: The Poisson can approximate the binomial (this applies when n*p is low; in this case n*p is not super low, but it's getting there). And, indeed: =BINOM.DIST(X = 5, trials = 500, p = 1%, cumulative = false) = 17.63510451%, and =POISSON.DIST(X = 5, mean = 1%*500, cumulative = false) = 17.54673698%. Their cumulative (CDF) is even closer: =BINOM.DIST(X = 5,...
    Replies: 44 · Views: 793
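    A quick check of the post's numbers with scipy (same inputs as the Excel formulas above):

    ```python
    # Sketch: verifying the Poisson approximation to the binomial from the post.
    from scipy.stats import binom, poisson

    n, p, x = 500, 0.01, 5                   # n*p = 5

    print(binom.pmf(x, n, p))                # 0.176351..., matches BINOM.DIST(..., FALSE)
    print(poisson.pmf(x, n * p))             # 0.175467..., matches POISSON.DIST(..., FALSE)
    print(binom.cdf(x, n, p), poisson.cdf(x, n * p))  # cumulative values are even closer
    ```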
  10. Fran

    P1.T2.307. Skew and Kurtosis (Miller)

    OK... That's clear now. Thanks a lot David and Ami44.
    Replies: 30 · Views: 857
  11. David Harper CFA FRM

    L1.T2.67 Sample variance, covariance, skew, kurtosis (Gujarati)

    @vsrivast You can use E(Y^2) - [E(Y)]^2, but then you are implicitly retrieving a population (not a sample) variance, because you are assuming these sample statistics characterize the population; e.g., you are assuming that E(Y^2) is given by 103/5 = 20.6 (and the implied population variance is 2.96), but this is just a sample of 5 observations. Here is how your approach could be justified: If instead of a sample of five observations n = {1, 2,...
    Replies: 8 · Views: 114
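    A sketch of the population-versus-sample distinction; the five observations are a hypothetical sample chosen to match the post's E[Y^2] = 103/5 and population variance of 2.96 (the thread's actual sample is truncated):

    ```python
    # Sketch: population vs sample variance; ddof controls the divisor in NumPy.
    # The observations are a hypothetical sample consistent with the post's numbers.
    import numpy as np

    y = np.array([2.0, 3.0, 4.0, 5.0, 7.0])   # sum of squares = 103, mean = 4.2

    pop_var = np.var(y, ddof=0)    # divides by n: E[Y^2] - E[Y]^2
    sample_var = np.var(y, ddof=1) # divides by n-1: unbiased estimator

    print(pop_var, sample_var)     # 2.96 3.70
    ```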
  12. David Harper CFA FRM

    L1.T2.99 Bootstrap method (Jorion)

    Hi @nielsklaver Okay, I see your question, and mathematically, I am not sure (and I do not recall any specific reference), but if we are mindful of the mechanics here, then what I would say, intuitively, is: If our historical vector is n factors * m days (e.g., 250 days), I do not see an advantage to a sample size less than m. If we are using a historical window of 250 days, why would we...
    Replies: 8 · Views: 123
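    A minimal sketch of bootstrap resampling over a historical window; the return series and sample sizes are assumptions for illustration:

    ```python
    # Sketch: bootstrap resampling of historical daily returns (with replacement).
    # The return series and trial count are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(7)
    returns = rng.normal(0.0, 0.01, 250)          # stand-in for a 250-day history

    n_trials = 10_000
    boot = rng.choice(returns, size=(n_trials, returns.size), replace=True)

    # e.g., bootstrap the sampling distribution of the mean return
    means = boot.mean(axis=1)
    print(means.mean(), means.std())              # center and spread of the estimator
    ```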
  13. David Harper CFA FRM

    P1.T2.201. Random variables (Stock & Watson)

    Hello In case it helps in the future, this curriculum analysis spreadsheet shows all of the changes in the FRM curriculum from year to year: . You can easily use the search function to find specific concepts and learning objectives. For example, when you search for the learning objectives in this specific question set, you will find that they are under Miller, Chapter 2: Probabilities in...
    Replies: 14 · Views: 362
  14. Pam Gordon

    P1.T2.309. Probability Distributions I, Miller Chapter 4

    Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00%, which I also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, false) = 0.180. I'm insecure, so I like to check it with the Excel function ;) Thanks!
    Replies: 55 · Views: 1,294
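    The same check in Python, using the post's own numbers:

    ```python
    # Sketch: the combinatorial formula behind the binomial pmf quoted in the post.
    from math import comb

    p, n, k = 0.95, 100, 95
    prob = comb(n, k) * p**k * (1 - p)**(n - k)   # C(100,95) * 0.95^95 * 0.05^5

    print(round(prob, 3))  # 0.18, matching BINOM.DIST(95, 100, 0.95, FALSE)
    ```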
  15. Suzanne Evans

    P1.T2.203. Skew and kurtosis (Stock & Watson)

    Thanks, you are too kind @jacek Please, no offense taken! I am grateful for your attention. Question 203.3 was written in 2012 (that is the meaning of "2xx", just FYI; this year's questions are numbered "7xx"). I mention that only because I would not write this question today; never mind that it is actually based on an old GARP exam question. Today, I agree with you fully about this. I view kurtosis...
    Replies: 10 · Views: 287
  16. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Learning objectives: Define copula and describe the key properties of copulas and copula correlation. Explain tail dependence. Describe the Gaussian copula, Student's t-copula, multivariate copula, and one-factor copula. Questions: 707.1. Below are the joint probabilities for a cumulative bivariate normal distribution with a correlation parameter, ρ, of 0.30. If V(1) and V(2) are each...
    Replies: 0 · Views: 50
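    A sketch of how such a table of joint probabilities can be generated, using the question's ρ = 0.30; the evaluation points are assumptions for illustration:

    ```python
    # Sketch: joint (cumulative) probabilities of a bivariate standard normal
    # with correlation 0.30, as in question 707.1; the grid points are assumed.
    import numpy as np
    from scipy.stats import multivariate_normal

    rho = 0.30
    cov = np.array([[1.0, rho],
                    [rho, 1.0]])
    bvn = multivariate_normal(mean=[0.0, 0.0], cov=cov)

    # Pr[V(1) <= x1, V(2) <= x2] for a few points
    for x1, x2 in [(-0.5, -0.5), (0.0, 0.0), (0.5, 0.5)]:
        print(f"Pr[V1<={x1}, V2<={x2}] = {bvn.cdf([x1, x2]):.4f}")
    ```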
  17. Fran

    P1.T2.302. Bayes' Theorem (Miller)

    thank you!!
    Replies: 11 · Views: 335
  18. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3) along with daily log returns, squared...
    Replies: 0 · Views: 24
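    A minimal sketch of the equally weighted estimator from closing prices; the ten prices are assumptions (the question's actual series is not shown here):

    ```python
    # Sketch: simple (equally weighted) historical volatility from closing prices.
    # The ten closing prices are assumptions; the question's series is not shown.
    import numpy as np

    prices = np.array([20.00, 20.10, 19.90, 20.00, 20.50,
                       20.25, 20.90, 20.90, 20.90, 20.75])

    u = np.diff(np.log(prices))        # daily log returns: u_i = ln(S_i / S_{i-1})
    var_rate = np.mean(u**2)           # equally weighted variance rate (zero-mean form)
    daily_vol = np.sqrt(var_rate)

    print(daily_vol, daily_vol * np.sqrt(252))   # daily and annualized volatility
    ```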
  19. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    Learning objective: Describe the sources of seasonality and how to deal with it in time series analysis. Questions 700.1. Which of the following time series is MOST LIKELY to contain a seasonal pattern? a. Price of solar panels b. Employment participation rate c. Climate data recorded from a weather station once per year d. Return on average assets (ROA) for the large commercial bank...
    Replies: 0 · Views: 44
  20. Pam Gordon

    P1.T2.310. Probability Distributions II, Miller Chapter 4

    Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of 10% = µ(B) - µ(A) = µ[difference] = +10%. And we are looking for the probability that we observe a difference of 18.0%, so we want the probability implied by Z = (observed - µ[diff])/σ[diff]. Thanks,
    Replies: 48 · Views: 1,129
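    A sketch of that z-score calculation; the standard error of the difference (5%) is a hypothetical assumption, as the thread does not show it:

    ```python
    # Sketch: z-score for an observed difference against a hypothesized mean difference.
    # The standard error (sigma_diff = 0.05) is an assumption; the thread omits it.
    from scipy.stats import norm

    mu_diff = 0.10        # null: expected difference of +10%
    observed = 0.18       # observed difference of +18%
    sigma_diff = 0.05     # hypothetical standard error of the difference

    z = (observed - mu_diff) / sigma_diff
    p_value = norm.sf(z)  # one-sided Pr[Z > z]

    print(z, p_value)     # 1.6, ~0.0548
    ```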
  21. Nicole Seaman

    Quiz-T2 P1.T2.407. Univariate linear regression

    Hello @uness_o7 Thank you for pointing this out. I will get this fixed as soon as possible. Nicole
    Replies: 12 · Views: 228
  22. Suzanne Evans

    P1.T2.214. Regression lines (Stock & Watson)

    Hi Ben (@ohmanb ) Yes, it's foundational! You really just need these:
    cov(x,y) = standard_deviation(x)*standard_deviation(y)*correlation(x,y) = σ(x)*σ(y)*ρ(x,y)
    covariance(x,x) = variance(x); which you can see from above because ρ(x,x) = 1.0, so that cov(x,x) = σ(x)*σ(x)*1.0 = σ(x)^2
    Therefore, β(i,M) = cov(i,M)/σ(M)^2 = [σ(i)*σ(M)*ρ(i,M)]/σ(M)^2, and we can cancel one StdDev such that =...
    Replies: 13 · Views: 261
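    A sketch verifying the two equivalent beta formulas; the return series are simulated assumptions for illustration:

    ```python
    # Sketch: beta as cov(i,M)/var(M), equivalently sigma(i)*rho(i,M)/sigma(M).
    # The return series are simulated assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    market = rng.normal(0.0, 0.02, 5_000)
    asset = 1.2 * market + rng.normal(0.0, 0.01, 5_000)   # true beta = 1.2

    beta_cov = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
    rho = np.corrcoef(asset, market)[0, 1]
    beta_rho = np.std(asset, ddof=1) * rho / np.std(market, ddof=1)

    print(beta_cov, beta_rho)   # both should be close to 1.2
    ```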
  23. Suzanne Evans

    P1.T2.222. Homoskedasticity-only F-statistic (Stock & Watson)

    @uness_o7 brilliant, I just did not see it. Thank you!
    Replies: 14 · Views: 323
  24. David Harper CFA FRM

    L1.T2.93 Jarque-Bera (Gujarati)

    I would like to dig a bit deeper into the theory of the Jarque-Bera (JB) test because it is a very useful test and, what is more, it is very easy to implement (without using econometric software) and to understand (not only for testing regression residuals, but also for simple stock returns). Much to my surprise, the JB test is not a mandatory reading of the FRM (apparently it has been removed...
    Replies: 14 · Views: 155
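    A sketch of that easy implementation, computing JB = n/6 * (S^2 + (K - 3)^2/4) by hand and checking it against scipy; the sample is an assumption for illustration:

    ```python
    # Sketch: the Jarque-Bera statistic computed by hand and checked against scipy.
    # The sample (normal draws) is an assumption for illustration.
    import numpy as np
    from scipy.stats import skew, kurtosis, jarque_bera

    rng = np.random.default_rng(3)
    x = rng.standard_normal(1_000)           # returns drawn from a normal

    n = x.size
    S = skew(x)
    K = kurtosis(x, fisher=False)            # "raw" kurtosis; normal => 3
    jb_manual = n / 6 * (S**2 + (K - 3)**2 / 4)

    jb_scipy, p_value = jarque_bera(x)
    print(jb_manual, jb_scipy, p_value)      # statistics agree; high p => looks normal
    ```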
  25. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies: 25 · Views: 887
  26. Nicole Seaman

    P1.T2.508. Wold's theorem (Diebold)

    @[user 42750] Okay, great. No worries; honestly, I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...
    Replies: 4 · Views: 227
  27. David Harper CFA FRM

    L1.T2.94 Forecasting (prediction) error (Gujarati)

    Hi @FRM The predictor variance (aka, forecasting or prediction error) is from the previously assigned Gujarati, but it is no longer assigned in P1.T2. Regressions; it's a bit too difficult. Sorry. Thank you!
    Replies: 2 · Views: 84
  28. Nicole Seaman

    PQ-T2 P1.T2.319. Probabilities (Topic Review)

    Hi @Angelinelyt Under annual compounding, the price for this 12-year zero-coupon bond is given by P = 100/(1+y)^12. We want the yield that would imply the lower price, such that $60.00 = 100/(1+y)^12, so (1+y)^12 = 100/60 and y = (100/60)^(1/12) - 1. This sets up the yield shock required for the bond price to drop: From current $62.46 = $100/(1+4.000%)^12, Down to: $60.00 =...
    Replies: 11 · Views: 319
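    The same calculation in Python, using the thread's own numbers:

    ```python
    # Sketch: solving for the yield implied by the lower zero-coupon bond price,
    # using the thread's numbers (12-year zero, annual compounding).
    face, years = 100.0, 12

    price_now = face / (1 + 0.04) ** years        # $62.46 at y = 4.000%
    price_low = 60.00
    y_low = (face / price_low) ** (1 / years) - 1 # invert P = 100/(1+y)^12

    print(round(price_now, 2), round(y_low, 5))   # 62.46, ~0.04349 (about 4.35%)
    ```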
  29. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved it:
    e1 = z1 = -0.88
    e2 = ρ*z1 + z2*sqrt(1-ρ^2)
    e2 = [0.70*(-0.88)] + [0.63*sqrt(1-(0.7)^2)]
    e2 = -0.16609
    U = Mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36
    V = Mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346
    Thanks, Rajiv
    Replies: 20 · Views: 837
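    A sketch reproducing Rajiv's one-factor calculation exactly as posted:

    ```python
    # Sketch: reproducing the one-factor calculation for two correlated samples.
    from math import sqrt

    rho = 0.70
    z1, z2 = -0.88, 0.63                      # independent standard normal draws

    e1 = z1
    e2 = rho * z1 + z2 * sqrt(1 - rho**2)     # correlated with e1 at rho = 0.70

    U = 5 + 3 * e1                            # mean 5, SD 3
    V = 10 + 6 * e2                           # mean 10, SD 6

    print(e2, U, V)                           # -0.16609..., 2.36, 9.00346...
    ```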
  30. Nicole Seaman

    PQ-T2 P1.T2.322. Multivariate linear regression (topic review)

    Hi @Aradhikka My apologies: the displayed values are rounded. The question is entirely realistic (based on an actual dataset) such that the MEAL_PCT coefficient = -0.545566 and its standard error = 0.021518, which gives a t-ratio of 25.35. Yours looks approximately correct for the displayed values (which is all you have, of course). So it's just rounding. I have tagged it for non-urgent revision....
    Replies: 6 · Views: 162
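    The t-ratio arithmetic, using the post's unrounded values:

    ```python
    # Sketch: the t-ratio is just coefficient / standard error, per the post's values.
    coef, se = -0.545566, 0.021518
    t_ratio = coef / se
    print(round(abs(t_ratio), 2))   # 25.35
    ```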
