P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

1. P1.T2.304. Covariance (Miller)

@omar72787 Question 303.2 concerns a continuous probability function, as opposed to the discrete probability function assumed in the (above) 304.3. But the expected value (aka, weighted average or mean) is computed analogously: the continuous case's integrand (i.e., the term inside the integral), x*f(x)*dx, is analogous to the x*f(x) inside the discrete summation. See below. Rather than sum the (X+1)^2 values to get 90...
Replies: 27 · Views: 716
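The summation-vs-integral parallel described above can be sketched in Python; the pmf and pdf below are hypothetical illustrations, not the distributions from questions 303.2/304.3:

```python
from fractions import Fraction

# Discrete: E[X] = sum of x * f(x) over the support (hypothetical pmf)
pmf = {1: Fraction(1, 6), 2: Fraction(2, 6), 3: Fraction(3, 6)}
mean_discrete = sum(x * p for x, p in pmf.items())
print(mean_discrete)  # 7/3

# Continuous: E[X] = integral of x * f(x) dx. For the hypothetical pdf
# f(x) = 2x on [0, 1], approximate the integral with a midpoint Riemann sum.
n = 100_000
h = 1.0 / n
mean_continuous = sum((i + 0.5) * h * 2 * ((i + 0.5) * h) * h for i in range(n))
print(round(mean_continuous, 4))  # ~0.6667, i.e. the exact 2/3 from calculus
```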
2. PQ-T2 | P1.T2.323. Monte Carlo Simulation and GBM (Topic Review)

Oh yah my bad. I must have overlooked something. Thanks for clarification.
Replies: 8 · Views: 301
3. P1.T2.315. Miller's hypothesis tests, continued

Hi, first let us note the following definition of Chebyshev's inequality (keyword: at least): the proportion of observations within k standard deviations of the arithmetic mean is at least 1 - 1/k^2 [k must be > 1]. It gives the minimum proportion (hence "at least") that must lie within the interval. You will basically see two examples where Chebyshev's inequality is applied. Let's run...
Replies: 12 · Views: 240
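The "at least 1 - 1/k^2" computation above is a one-liner; a minimal sketch:

```python
def chebyshev_min_proportion(k: float) -> float:
    """Minimum proportion of observations within k standard deviations
    of the mean, per Chebyshev's inequality (informative only for k > 1)."""
    if k <= 1:
        raise ValueError("Chebyshev's inequality is informative only for k > 1")
    return 1 - 1 / k**2

print(chebyshev_min_proportion(2))    # 0.75 -> at least 75% within 2 sigma
print(chebyshev_min_proportion(1.5))  # ~0.5556 -> at least ~55.6% within 1.5 sigma
```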
4. L1.T2.103 Weighting schemes to estimate volatility (Hull)

Hi @s3filin Great question and, yes, I am indeed saying that "Beta [in GARCH] is a decay factor and is analogous to lambda in EWMA." Hull actually shows this specifically in Chapter 23.4; I copied it below. In this way, GARCH β is analogous to EWMA λ, and GARCH α is analogous to EWMA's (1-λ); so I would not say--and hopefully did not anywhere say--something like "what's lambda for EWMA is...
Replies: 11 · Views: 409
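The analogy above can be verified numerically: EWMA is the restricted GARCH(1,1) with ω = 0, α = 1 − λ, β = λ. A toy update with hypothetical prior variance and return:

```python
def ewma_update(prev_var, prev_return, lam=0.94):
    """EWMA: sigma^2(n) = lambda * sigma^2(n-1) + (1 - lambda) * u^2(n-1)."""
    return lam * prev_var + (1 - lam) * prev_return**2

def garch11_update(prev_var, prev_return, omega, alpha, beta):
    """GARCH(1,1): sigma^2(n) = omega + alpha * u^2(n-1) + beta * sigma^2(n-1)."""
    return omega + alpha * prev_return**2 + beta * prev_var

# Hypothetical inputs: prior daily variance and prior daily return
v, u, lam = 0.0001, 0.02, 0.94
print(ewma_update(v, u, lam))
print(garch11_update(v, u, omega=0.0, alpha=1 - lam, beta=lam))
# Both print the same updated variance: beta plays lambda's decay role
```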
5. P1.T2.305. Minimum variance hedge (Miller)

What a sigh of relief this is, @David Harper CFA FRM! Otherwise I would have been regarded as a complete idiot. Thanks for the confirmation!
Replies: 21 · Views: 702
6. PQ-T2 | P1.T2.318. Distributional moments (Topic review)

Indeed. Yes, Excel uses the long formula for smaller samples when calculating SKEW. Thank you for this, @David Harper CFA FRM !
Replies: 12 · Views: 223
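For reference, a sketch of the small-sample ("long") formula that Excel's SKEW uses, n/((n-1)(n-2)) * Σ((x - x̄)/s)^3 with s the sample standard deviation; the data here is the worked example from Excel's SKEW documentation:

```python
import statistics

def sample_skew(xs):
    """Bias-adjusted sample skewness, the 'long' formula used by Excel's SKEW():
    n/((n-1)(n-2)) * sum(((x - mean)/s)^3), with s the sample stdev (n-1 basis)."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)  # sample standard deviation (n - 1 denominator)
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in xs)

print(round(sample_skew([3, 4, 5, 2, 3, 4, 5, 6, 4, 7]), 6))  # ~0.359543
```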
7. P1.T2.303 Mean and variance of continuous probability density functions (pdf) (Miller)

Hi @chintanudeshi To retrieve the mean of a continuous probability distribution, we integrate x*f(x) over the probability domain. This is calculus; may I refer you to this terrific video, which explains the mean and variance:
Replies: 49 · Views: 1,060
8. PQ-T2 | P1.T2.316. Discrete distributions (Topic review)

Hi @FRM Mark It's given in the assumption "316.3. The current price of an asset is S(0) and its future evolution is modeled with a binomial tree. At each node, there is a 62% probability of a (jump-up) increase." Thanks!
Replies: 7 · Views: 243
9. P1.T2.311. Probability Distributions III, Miller

Hi @s3filin This is a typical Monte Carlo assumption: that certain risk factors are (at least a little bit) correlated. This would be used any time we want correlated normals in a Monte Carlo simulation; it is hardly an exaggeration to say that independence (i.e., zero correlation) would be the unusual assumption. But it's super easy to generate non-correlated normals, so the point is to...
Replies: 25 · Views: 461
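A minimal sketch of generating correlated normals for such a simulation, using the standard two-variable Cholesky construction z2 = ρ*z1 + sqrt(1 - ρ^2)*e; all inputs here are hypothetical:

```python
import math
import random

def correlated_normals(rho, n, seed=42):
    """Generate n pairs of standard normals with correlation rho via the
    bivariate Cholesky trick: z2 = rho*z1 + sqrt(1 - rho^2)*e."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        e = rng.gauss(0, 1)   # independent shock
        z2 = rho * z1 + math.sqrt(1 - rho**2) * e
        pairs.append((z1, z2))
    return pairs

pairs = correlated_normals(rho=0.6, n=50_000)
# Sample correlation should land near the 0.6 target
mx = sum(a for a, _ in pairs) / len(pairs)
my = sum(b for _, b in pairs) / len(pairs)
cov = sum((a - mx) * (b - my) for a, b in pairs) / len(pairs)
sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs) / len(pairs))
sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs) / len(pairs))
print(round(cov / (sx * sy), 2))  # close to 0.6
```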
10. P1.T2.701. Regression analysis to model seasonality (Diebold)

Many thanks for your lovely comment, Brian. It is nothing special, I guess; I have just read a few textbooks, that's it. The modelling (implementation work) is another kettle of fish: these "goodness of fit" tests are technically quite complex. Again, thanks for your like!
Replies: 11 · Views: 100
11. P1.T2.705. Correlation (Hull)

Thank you emilioalzamora and David for such a detailed explanation.
Replies: 13 · Views: 172
12. P1.T2.506. Covariance stationary time series (Diebold)

Hi, see below: I copied Diebold's explanation of partial autocorrelation (which is excellent, in my opinion). If you keep in mind the close relationship between beta and correlation, then you can view this as analogous to the difference between (in a regression) a univariate slope coefficient and a partial multivariate slope coefficient. We can extract correlation by multiplying...
Replies: 6 · Views: 189
13. P1.T2.210. Hypothesis testing (Stock & Watson)

Thanks a lot David!!!!
Replies: 13 · Views: 243
14. L1.T2.111 Binomial & Poisson (Rachev)

Hi @s3filin It's a terrific observation. The Poisson can approximate the binomial, which applies when n*p is low (in this case n*p is not super low, but it's getting there). And, indeed: =BINOM.DIST(X = 5, trials = 500, p = 1%, cumulative = FALSE) = 17.63510451%, and =POISSON.DIST(X = 5, mean = 1%*500, cumulative = FALSE) = 17.54673698%. Their cumulative (CDF) values are even closer: =BINOM.DIST(X = 5,...
Replies: 44 · Views: 795
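The approximation above can be replicated without Excel; with X = 5, n = 500, p = 1%, the Poisson mean is λ = n*p = 5:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return exp(-lam) * lam**k / factorial(k)

print(round(binom_pmf(5, 500, 0.01), 6))  # 0.176351, matching BINOM.DIST above
print(round(poisson_pmf(5, 5), 6))        # 0.175467, matching POISSON.DIST above
```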
15. P1.T2.307. Skew and Kurtosis (Miller)

OK... That's clear now. Thanks a lot David and Ami44.
Replies: 30 · Views: 872
16. L1.T2.67 Sample variance, covariance, skew, kurtosis (Gujarati)

@vsrivast You can use E(Y^2) - [E(Y)]^2, but then you are implicitly retrieving a population (not a sample) variance, because you are assuming these sample statistics characterize the population; e.g., you are assuming that E(Y^2) is given by 103/5 = 2.96, but this is just a sample of 5 observations. Here is how your approach could be justified: if instead of a sample of five observations n = {1, 2,...
Replies: 8 · Views: 114
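The population-vs-sample distinction above can be sketched with a hypothetical five-observation sample (not the thread's data):

```python
import statistics

ys = [1, 2, 3, 4, 5]  # hypothetical sample of five observations
n = len(ys)

# Population variance treats the sample AS the population: E(Y^2) - [E(Y)]^2
pop_var = sum(y**2 for y in ys) / n - (sum(ys) / n) ** 2
assert pop_var == statistics.pvariance(ys)  # pvariance divides by n

# Sample variance divides by (n - 1) for an unbiased estimate
samp_var = statistics.variance(ys)
print(pop_var, samp_var)  # 2.0 2.5
```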
17. L1.T2.99 Bootstrap method (Jorion)

Hi @nielsklaver Okay, I see your question. Mathematically, I am not sure (and I do not recall any specific reference), but if we are mindful of the mechanics here, then intuitively I would say: if our historical vector is n factors * m days (e.g., 250 days), I do not see an advantage to a sample size less than m. If we are using a historical window of 250 days, why would we...
Replies: 8 · Views: 123
18. P1.T2.201. Random variables (Stock & Watson)

Hello In case it helps in the future, this curriculum analysis spreadsheet shows all of the changes in the FRM curriculum from year to year: . You can easily use the search function to find specific concepts and learning objectives. For example, when you search for the learning objectives in this specific question set, you will find that they are under Miller, Chapter 2: Probabilities in...
Replies: 14 · Views: 369
19. P1.T2.309. Probability Distributions I, Miller Chapter 4

Hi @s3filin Yes, exactly. I think your phrasing is spot-on! As phrased, the answer should be the same 18.00% which I do also get with =C(100,95)*.95^95*.05^5 = BINOM.DIST(95, 100, 0.95, false) = 0.180. I'm insecure, I like to check it with the Excel function Thanks!
Replies: 55 · Views: 1,320
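The two equivalent calculations quoted above can be replicated outside Excel:

```python
from math import comb

# P(exactly 95 successes in 100 trials at p = 95%), as in the thread:
# C(100,95) * 0.95^95 * 0.05^5 = BINOM.DIST(95, 100, 0.95, FALSE)
p = comb(100, 95) * 0.95**95 * 0.05**5
print(round(p, 3))  # 0.18
```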
20. P1.T2.203. Skew and kurtosis (Stock & Watson)

Thanks, you are too kind @jacek Please, no offense taken! I am grateful for your attention. Question 203.3 was written in 2012 (that is the meaning of "2xx", just FYI; this year's questions are numbered "7xx"). I mention that only because I would not write this question today; never mind that it is actually based on an old GARP exam question. Today, I agree with you fully about this. I view kurtosis...
Replies: 10 · Views: 289
21. P1.T2.707. Gaussian Copula (Hull)

Learning objectives: Define copula and describe the key properties of copulas and copula correlation. Explain tail dependence. Describe the Gaussian copula, Student’s t-copula, multivariate copula, and one-factor copula. Questions: 707.1. Below are the joint probabilities for a cumulative bivariate normal distribution with a correlation parameter, ρ, of 0.30. If V(1) and V(2) are each...
Replies: 0 · Views: 55

thank you!!
Replies: 11 · Views: 339
23. P1.T2.702. Simple (equally weighted) historical volatility (Hull)

Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions: 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3), along with daily log returns, squared...
Replies: 0 · Views: 33
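The simple (equally weighted) scheme named in the title can be sketched with hypothetical closing prices (not the data from Hull's Table 10.3):

```python
import math
import statistics

# Hypothetical closing prices over ten recent trading days
prices = [20.00, 20.10, 19.90, 20.00, 20.50, 20.25, 20.90, 20.90, 20.90, 20.75]

# Daily log returns u_i = ln(S_i / S_{i-1})
returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# Equally weighted estimate: sample standard deviation of the log returns
daily_vol = statistics.stdev(returns)
annual_vol = daily_vol * math.sqrt(252)  # annualize with 252 trading days
print(round(daily_vol, 4), round(annual_vol, 4))
```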
24. P1.T2.700. Seasonality in time series analysis (Diebold)

Learning objective: Describe the sources of seasonality and how to deal with it in time series analysis. Questions: 700.1. Which of the following time series is MOST LIKELY to contain a seasonal pattern? a. Price of solar panels b. Employment participation rate c. Climate data recorded from a weather station once per year d. Return on average assets (ROA) for the large commercial bank...
Replies: 0 · Views: 54
25. P1.T2.310. Probability Distributions II, Miller Chapter 4

Hi @sandra1122 We are told that E(A) = +10% and E(B) = +20%, so the null is an expected difference of 10% = E[µ(A) - µ(B)] = µ[difference] = +10%. And we are looking for the probability that we observe a difference of 18.0%, so we standardize with Z = (observed - µ[diff])/σ. Thanks,
Replies: 48 · Views: 1,149
26. Quiz-T2 | P1.T2.407. Univariate linear regression

Hello @uness_o7 Thank you for pointing this out. I will get this fixed as soon as possible. Nicole
Replies: 12 · Views: 236
27. P1.T2.214. Regression lines (Stock & Watson)

Hi Ben (@ohmanb ) Yes, it's foundational! You really just need these:
cov(x,y) = standard_deviation(x)*standard_deviation(y)*correlation(x,y) = σ(x)*σ(y)*ρ(x,y)
covariance(x,x) = variance(x), which you can see from above because ρ(x,x) = 1.0, so that cov(x,x) = σ(x)*σ(x)*1.0 = σ(x)^2
Therefore, β(i,M) = cov(i,M)/σ(M)^2 = [σ(i)*σ(M)*ρ(i,M)]/σ(M)^2, and we can cancel one StdDev such that =...
Replies: 13 · Views: 266
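The cancellation described above can be checked numerically; the volatilities and correlation below are hypothetical inputs:

```python
# beta(i, M) = cov(i, M) / var(M) = [sigma_i * sigma_M * rho] / sigma_M^2
#            = sigma_i * rho / sigma_M   (one sigma_M cancels)
def beta_from_cov(cov_im, var_m):
    return cov_im / var_m

def beta_from_corr(sigma_i, sigma_m, rho):
    return sigma_i * rho / sigma_m

# Hypothetical inputs: sigma_i = 20%, sigma_M = 15%, rho = 0.6
sigma_i, sigma_m, rho = 0.20, 0.15, 0.6
cov_im = sigma_i * sigma_m * rho
print(beta_from_cov(cov_im, sigma_m**2))    # both give beta = 0.8
print(beta_from_corr(sigma_i, sigma_m, rho))  # (up to float rounding)
```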
28. P1.T2.222. Homoskedasticity-only F-statistic (Stock & Watson)

@uness_o7 brilliant, I just did not see it. Thank you!
Replies: 14 · Views: 326
29. L1.T2.93 Jarque-Bera (Gujarati)

I would like to dig a bit deeper in the theory about the Jarque-Bera (JB) Test because it is a very useful test and what is more, it is very easy to implement (without using econometric software) and to understand (not only for testing regression residuals, but also for simple stock returns). Much to my surprise the JB-test is not a mandatory reading of the FRM (apparently it has been removed...
Replies: 14 · Views: 163
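The JB test praised above is indeed easy to implement without econometric software; a minimal sketch using the simple moment-based skewness S and kurtosis K, with JB = n/6 * (S^2 + (K - 3)^2 / 4):

```python
def jarque_bera(xs):
    """Jarque-Bera statistic JB = n/6 * (S^2 + (K - 3)^2 / 4), using simple
    moment-based skewness S and kurtosis K. Under normality JB is
    asymptotically chi-squared with 2 degrees of freedom."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2**1.5
    kurt = m4 / m2**2
    return n / 6 * (skew**2 + (kurt - 3) ** 2 / 4)

# A symmetric toy sample has S = 0, so JB reflects only excess kurtosis
print(round(jarque_bera([-2, -1, 0, 1, 2]), 4))  # ~0.3521
```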
30. P1.T2.504. Copulas (Hull)

Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
Replies: 25 · Views: 963