# P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, Monte Carlo Simulation (MCS), Volatility, Probability Distributions, and VaR (Intro)

1. ### P1.T2.700. Seasonality in time series analysis (Diebold)

@David Harper CFA FRM got it!! It’s a really important question to understand the difference between s=n dummy variables and s=n-1 dummies plus an intercept, thank you.
Replies: 5 · Views: 97
2. ### P1.T2.221. Joint null hypothesis in multiple OLS regression (Stock & Watson)

Hi @FRM candidate Correct. The regression's implied change in the dependent variable is per unit of the independent variable. In regard to the lower bound only, then, a +1,000 Δ to Lot implies 1,000 * 0.000764 (lower bound only) = 0.764 change in the dependent, which is "selling price (in $1,000s)", so that's $1,000 * 0.764 = $764.00. So this does look correct to me. Thanks,
Replies: 18 · Views: 393
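The arithmetic in the reply above can be sketched in a few lines. The figures (the 0.000764 lower-bound slope, price measured in $1,000s) are from the snippet; the rest of the regression is not shown, so this is illustration only:

```python
# A slope coefficient expresses the change in the dependent variable per
# one unit of the independent variable. Here the dependent is "selling
# price" in $1,000s and the lower bound of the Lot coefficient is 0.000764.
slope_lower = 0.000764      # lower bound of the coefficient's interval
delta_lot = 1_000           # +1,000-unit change in Lot

change_in_dependent = delta_lot * slope_lower       # in $1,000s
change_in_dollars = change_in_dependent * 1_000     # convert to dollars

print(round(change_in_dependent, 3))  # 0.764
print(round(change_in_dollars, 2))    # 764.0
```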
3. ### P1.T2.505. Model selection criteria (Diebold)

"Hi, this is a very challenging topic but also an important one in the field of quantitative analysis; I would like to give some background about it before we go into detail: all these so-called goodness-of-fit measures are based on the "Principle of Parsimony" (sometimes known as Occam's razor, named after an English philosopher): the principle is intuitive as it puts a premium on elegance....
Replies: 6 · Views: 365
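As background to the model-selection discussion, here is a minimal sketch of two common criteria. The MSE-scaled functional forms below follow Diebold's presentation as commonly stated, but treat the exact forms as an assumption, and note the residuals in the demo are made up:

```python
import math

# Diebold-style, MSE-scaled selection criteria (smaller is better):
#   AIC = exp(2k/T) * sum(e^2)/T    and    SIC = T^(k/T) * sum(e^2)/T,
# where T is the sample size and k the number of estimated parameters.
def aic(residuals, k):
    T = len(residuals)
    mse = sum(e * e for e in residuals) / T
    return math.exp(2 * k / T) * mse

def sic(residuals, k):
    T = len(residuals)
    mse = sum(e * e for e in residuals) / T
    return T ** (k / T) * mse

# For T >= 8, the SIC penalizes extra parameters harder than the AIC.
residuals = [1.0, -1.0, 2.0, -2.0, 1.0, -1.0, 2.0, -2.0, 1.0, -1.0]
print(sic(residuals, 2) > aic(residuals, 2))  # True
```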
4. ### P1.T2.717. Bayes' Theorem (Miller, Ch.6)

Hi @FRM candidate Because we still don't know if the model is good or bad! Before the introduction of any evidence, the prior (unconditional) probabilities are: 80.0% probability that the model is good and 20.0% that it is bad. Then we observe two exceptions in a row; we employ Bayes' theorem to revise the probabilities based on this evidence. The posterior probability that the model is bad thus increases...
Replies: 2 · Views: 52
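The Bayes update described in the reply can be sketched as follows. The 80%/20% priors are from the snippet; the per-day exception probabilities (5% for a good model, 10% for a bad one) and the independence of exceptions across days are assumptions for illustration:

```python
# Priors from the reply: 80% the model is good, 20% it is bad.
p_good, p_bad = 0.80, 0.20
# Assumed per-day exception probabilities (not shown in the snippet).
p_exc_good, p_exc_bad = 0.05, 0.10

# Evidence: two exceptions in a row (assumed independent)
lik_good = p_exc_good ** 2
lik_bad = p_exc_bad ** 2

# Bayes' theorem: P(bad | evidence) = P(evidence | bad)*P(bad) / P(evidence)
posterior_bad = (lik_bad * p_bad) / (lik_bad * p_bad + lik_good * p_good)
print(round(posterior_bad, 4))  # 0.5 -> the 20% prior rises to 50%
```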
5. ### P1.T2.214. Regression lines (Stock & Watson)

Thank you @David Harper CFA FRM. Yes, I had read the answer above, but my mistake was confusing Rf with the intercept. Thanks for clearing it up!
Replies: 19 · Views: 327
6. ### P1.T2.719. One- versus two-tailed hypothesis tests (Miller Ch.7)

Thank you so very much @David Harper CFA FRM - Crystal clear now.
Replies: 8 · Views: 91
7. ### P1.T2.314. Miller's one- and two-tailed hypotheses

Thank you @David Harper CFA FRM !
Replies: 26 · Views: 482
8. ### P1.T2.718. Confidence in the mean and variance (Miller Ch.7)

Hi @FlorenceCC By design, this question produced a critical value that just happens to be displayed on the lookup table. We can interpolate to approximate; for example, if the test statistic were 1.580, that's halfway between the displayed values at 0.10 and 0.05 (i.e., between 1.363 and 1.796), such that we could approximate the p-value as 7.50% (it won't be exactly correct as the underlying...
Replies: 2 · Views: 46
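The interpolation described in the reply can be sketched as follows; the degrees of freedom (11, which is what yields the 1.363 and 1.796 one-tailed critical values) are an inference, not shown in the snippet:

```python
# Linear interpolation of a one-tailed p-value between two t-table columns:
# at 11 df, t = 1.363 corresponds to p = 0.10 and t = 1.796 to p = 0.05.
t_lo, p_lo = 1.363, 0.10
t_hi, p_hi = 1.796, 0.05
t_stat = 1.580

frac = (t_stat - t_lo) / (t_hi - t_lo)     # position between the columns
p_approx = p_lo + frac * (p_hi - p_lo)
print(round(p_approx, 4))  # 0.0749, i.e., about 7.50%
```

Because 1.580 sits almost exactly midway between 1.363 and 1.796, the interpolated p-value lands almost exactly midway between 10% and 5%.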
9. ### P1.T2.707. Gaussian Copula (Hull)

Thank you David.
Replies: 3 · Views: 89
10. ### P1.T2.309. Probability Distributions I, Miller Chapter 4

@David Harper CFA FRM - thank you very much for such a detailed answer. Now that I understand the difference between event and outcome, or permutation vs. combination, allow me to supplement my question as follows: is it even possible to do the question without doing a binomial tree? I.e., on exam day, is there a way to think about this such that we can "quickly" understand that the 7/5...
Replies: 59 · Views: 1,529
11. ### P1.T2.322. Multivariate linear regression (topic review)

Hi @Jaskarn Not exactly. The confidence interval, CI = (sample statistic) +/- (standard error)*(critical value [as a function of confidence level]), where the most common example is: CI[mean] = sample mean +/- SE * (student's t critical value). But that's just the CI around a sample mean. Alternatively, we might carve out a CI around a VaR quantile, a sample variance, or other sample...
Replies: 8 · Views: 197
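A minimal sketch of the generic CI recipe in the reply, using the normal approximation for the critical value (a large-sample assumption) and made-up sample numbers:

```python
from math import sqrt
from statistics import NormalDist

# CI = (sample statistic) +/- (standard error) * (critical value)
sample_mean = 10.0
sample_sd = 2.0
n = 100
confidence = 0.95

se = sample_sd / sqrt(n)                             # standard error of the mean
z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # two-tailed critical value
ci = (sample_mean - z * se, sample_mean + z * se)
print(round(ci[0], 3), round(ci[1], 3))  # 9.608 10.392
```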
12. ### P1.T2.304. Covariance (Miller)

Ohh I was having this same doubt... thanks.
Replies: 28 · Views: 786
13. ### P1.T2.305. Minimum variance hedge (Miller)

It absolutely helps. Thank you!
Replies: 24 · Views: 800
14. ### L1.T2.86 OLS Regression (Gujarati)

Hi @Abiola This is an old question, admittedly, and it thus associates with the then-prevailing Econometrics assignment (the fabulous Essentials of Econometrics by Gujarati). I mention that because I'm not sure the technical detail queried by 86.5 is necessarily found (or at least easy to find) in the current Stock and Watson. In any case, the question is about two interesting features of an...
Replies: 8 · Views: 120
15. ### P1.T2.216. Regression sums of squares: ESS, SSR, and TSS (Stock & Watson)

Hi @verdi Yes, R^2 = 1 - SSR/TSS = ESS/TSS for univariate and multivariate (several regressors). However, because the R^2 tends to artificially increase as predictors (regressors) are added, the adjusted R^2 is better for multivariate: adjusted R^2 = 1 - (1 - R^2)*(n-1)/(n-k), where k is the total number of coefficients (incl. intercept) or total number of variables (incl. the dependent). Yes,...
Replies: 18 · Views: 312
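The adjusted R^2 formula in the reply can be sketched directly; the inputs in the demo are illustrative, not from the thread:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)*(n - 1)/(n - k), where k counts all
    coefficients including the intercept (the convention in the reply)."""
    return 1 - (1 - r2) * (n - 1) / (n - k)

# Illustrative: R^2 = 0.80, n = 50 observations, k = 4 coefficients.
# The adjustment pulls the R^2 down because the extra regressors are penalized.
print(round(adjusted_r2(0.80, 50, 4), 4))  # 0.787
```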
16. ### P1.T2.315. Miller's hypothesis tests, continued

Hi @lavi5h Yes, GARP is supposed to provide any required cumulative distribution function (CDF) lookup table (i.e., source of critical values), although in most cases with respect to the student's t, they are likely to want you to use the normal Z-table to approximate (i.e., large sample test of a sample mean); see...
Replies: 17 · Views: 284
17. ### L1.T2.118. Student's t distribution (Rachev)

Hi, mathematically these solutions are non-trivial, to my knowledge. Here is the best source from my library (see page 141 on Student's t). To be candid, as much math training as I have, I wouldn't be able to derive these solutions on my own. When I need to remember, I'll use Wikipedia. Intuitively, of course, each of the variance and excess variance is slightly greater than one. I hope that's a...
Replies: 5 · Views: 94
18. ### P1.T2.310. Probability Distributions II, Miller Chapter 4

Hi @verdi Your expression, var(x+y)=varX+varY+2covXY, is correct of course. But its general form, if we include constants (aka, weights) of 'a' and 'b' is given by var(aX + bY) = a^2*var(X) + b^2*var(Y) + 2*a*b*cov(X,Y); by general-special, I just mean that your expression is the "special case" where a = 1 and b = 1. In fact, this variance is a special case of the covariance and itself further...
Replies: 50 · Views: 1,288
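The general form in the reply can be sketched as follows; the numeric inputs are made up for illustration:

```python
def var_weighted_sum(a, b, var_x, var_y, cov_xy):
    """var(aX + bY) = a^2*var(X) + b^2*var(Y) + 2*a*b*cov(X,Y)."""
    return a ** 2 * var_x + b ** 2 * var_y + 2 * a * b * cov_xy

# The special case a = b = 1 recovers var(X+Y) = var(X) + var(Y) + 2*cov(X,Y):
print(var_weighted_sum(1, 1, 4.0, 9.0, 1.5))  # 16.0
# General case: 2^2*4 + 3^2*9 + 2*2*3*1.5
print(var_weighted_sum(2, 3, 4.0, 9.0, 1.5))  # 115.0
```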
19. ### P1.T2.307. Skew and Kurtosis (Miller)

Hi @verdi Yes, nice catch of the typo (which I did miss). It should be either Σ [(xi - μ)^2 * pi] or 1/n*Σ (xi - μ)^2, as in Σ [(xi - μ)^2 * pi] = (1-3)^2*(1/3) + (2-3)^2*(1/3) +(6-3)^2*(1/3) = 4.67, but since each of the outcomes is equally likely this is the same as "un-distributing the 1/3" with 1/n*Σ (xi - μ)^2 = (1/3)* [(1-3)^2 + (2-3)^2 +(6-3)^2] = 4.67. Thank you for your attention to...
Replies: 32 · Views: 979
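The variance arithmetic in the reply can be reproduced directly from the outcomes it names ({1, 2, 6}, each with probability 1/3, mean of 3):

```python
# Probability-weighted variance: sum of (x - mu)^2 * p(x). Since the
# outcomes are equally likely, this equals (1/n) * sum of (x - mu)^2.
outcomes = [1, 2, 6]
p = 1 / 3
mu = sum(x * p for x in outcomes)                  # 3.0
var = sum((x - mu) ** 2 * p for x in outcomes)     # (4 + 1 + 9)/3
print(round(var, 2))  # 4.67
```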
20. ### P1.T2.303 Mean and variance of continuous probability density functions (pdf) (Miller)

Thanks David. I was struggling in the beginning, but after redoing it and trying to understand the steps, it became more logical.
Replies: 50 · Views: 1,127
21. ### P1.T2.300. Probability functions (Miller)

Hello @fccodart I just wanted to make sure that you read through all of the comments in this forum thread (there are 5 pages of discussions in this thread) to see if your question was already answered. The first question that was posted asks about the antiderivative formulas, and David...
Replies: 80 · Views: 3,153
22. ### L1.T2.87 OLS regression interpretation (Gujarati)

Thank you David
Replies: 5 · Views: 90
23. ### P1.T2.715. Chi-squared distribution, Student’s t, and F-distributions (Miller Ch.4)

Learning Objectives: Distinguish the key properties among the following distributions: ... Chi-squared distribution, Student’s t, and F-distributions. Questions: For the following questions, please rely on the statistical lookup tables provided here. This document contains four lookup tables (note that each also contains an example): cumulative standard normal distribution, student's t...
Replies: 0 · Views: 38
24. ### P1.T2.213. Sample variance, covariance and correlation (Stock & Watson)

Hi David, thanks!
Replies: 10 · Views: 235
25. ### P1.T2.712. Skew, kurtosis, coskew and cokurtosis (Miller, Chapter 3)

Learning objectives: Describe the four central moments of a statistical variable or distribution: mean, variance, skewness, and kurtosis. Interpret the skewness and kurtosis of a statistical distribution, and interpret the concepts of coskewness and cokurtosis. Describe and interpret the best linear unbiased estimator. Questions: 712.1. Consider the following discrete probability...
Replies: 0 · Views: 46
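As a companion to the learning objectives above, here is a minimal sketch of skewness and kurtosis as standardized third and fourth central moments; this is a population-style calculation on a symmetric toy sample, not the question's data:

```python
from math import sqrt

def skew_kurtosis(xs):
    """Skewness = m3/sd^3 and (raw) kurtosis = m4/sd^4, where m2, m3, m4
    are the second, third, and fourth central moments."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    sd = sqrt(m2)
    return m3 / sd ** 3, m4 / sd ** 4

skew, kurt = skew_kurtosis([-2.0, -1.0, 0.0, 1.0, 2.0])
print(skew)  # 0.0: symmetric data has zero skewness
```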
26. ### P1.T2.711. Covariance and correlation (Miller, Ch.3)

Learning objectives: Calculate and interpret the covariance and correlation between two random variables. Calculate the mean and variance of sums of variables. Questions: 711.1. The following probability matrix displays joint probabilities for an inflation outcome, I = {2, 3, or 4}, and an unemployment outcome, U = {5, 7 or 9}. Also shown are the expected values and variances for each...
Replies: 0 · Views: 64
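The covariance machinery behind such a question can be sketched as follows. The outcome sets match the stem (I in {2, 3, 4}, U in {5, 7, 9}), but the joint probabilities below are invented for illustration; the actual matrix is not shown in the snippet:

```python
# Covariance from a joint probability matrix via cov(I,U) = E[IU] - E[I]E[U].
I_vals = [2, 3, 4]
U_vals = [5, 7, 9]
joint = [  # rows: I outcomes, columns: U outcomes; entries sum to 1.0
    [0.10, 0.05, 0.05],
    [0.05, 0.30, 0.05],
    [0.05, 0.05, 0.30],
]

e_i = sum(i * p for i, row in zip(I_vals, joint) for p in row)
e_u = sum(u * joint[r][c] for r in range(3) for c, u in enumerate(U_vals))
e_iu = sum(i * u * joint[r][c]
           for r, i in enumerate(I_vals) for c, u in enumerate(U_vals))
cov_iu = e_iu - e_i * e_u
print(round(cov_iu, 2))  # 0.52 with these made-up probabilities
```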
27. ### L1.T2.66 Skew & Kurtosis (Gujarati)

Thanks for your reply.
Replies: 9 · Views: 147
28. ### P1.T2.202. Variance of sum of random variables (Stock & Watson)

Hi @Arseniy Semiletenko Good point! In truth, it's a weakness of my question: I wrote this question in 2012 (per the 2xx.x numbering) and, having improved my technique, I would not today write a question that has two valid answers to the self-contained question. It's not a "best practice." It's a corollary of a rule that I've employed in reviewing, and giving feedback on, GARP's own practice...
Replies: 61 · Views: 1,192
29. ### P1.T2.217. Regression coefficients (Stock & Watson)

Hi @Arseniy Semiletenko Good questions, they are related I think. Please note the general form of the test statistic of the regression coefficient is given by (b - β)/se(b) where (b) is the observed regression coefficient and β is the null hypothesis. So in 217.1, where the observed regression coefficient is 1.080 and the null, let's call it β = 1.0, we have the t-stat = (1.080 - 1.0)/SE(b)...
Replies: 16 · Views: 262
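The test statistic in the reply can be sketched as follows; the coefficient 1.080 and the null of 1.0 are from the snippet, while SE(b) = 0.040 is a hypothetical value (the snippet truncates before the actual standard error):

```python
# General test statistic for a regression coefficient: t = (b - beta)/SE(b),
# where b is the observed coefficient and beta is the null-hypothesized value.
b = 1.080
beta_null = 1.0
se_b = 0.040   # hypothetical standard error, for illustration only

t_stat = (b - beta_null) / se_b
print(round(t_stat, 2))  # 2.0 -> exceeds ~1.96, the large-sample 5% two-tailed cutoff
```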
30. ### P1.T2.706. Bivariate normal distribution (Hull)

@David Harper CFA FRM, makes perfect sense now. thanks for taking the time again.
Replies: 8 · Views: 126