1) For a sample of 400 firms, the relationship between corporate revenue (Yi) and the average years of experience per employee (Xi) is modeled as follows:

Yi = β1 + β2 Xi + εi, i = 1, 2,...,400

You wish to test the joint null hypothesis that β1 = 0 and β2 = 0 at the 95% confidence level. The p-value for the t-statistic for β1 is 0.07, and the p-value for the t-statistic for β2 is 0.06. The p-value for the F-statistic for the regression is 0.045. Which of the following statements is correct?

a. You can reject the null hypothesis because each β is different from 0 at the 95% confidence level.
b. You cannot reject the null hypothesis because neither β is different from 0 at the 95% confidence level.
c. You can reject the null hypothesis because the F-statistic is significant at the 95% confidence level.
d. You cannot reject the null hypothesis because the F-statistic is not significant at the 95% confidence level.

Correct answer: c

Explanation: The t-test alone is not sufficient to test the joint hypothesis. To test the joint null hypothesis, examine the F-statistic, which in this case is statistically significant at the 95% confidence level. Thus the null can be rejected.
Could someone please explain this? How do we solve such questions without a table if they appear this way in the exam?

GARP 2013


2) You built a linear regression model to analyze annual salaries for a developed country. You incorporated two independent variables, age and experience, into your model. Upon reading the regression results, you noticed that the coefficient of “experience” is negative, which appears counter-intuitive. In addition, you discovered that the coefficients have low t-statistics but the regression model has a high R^2. What is the most likely cause of these results?

a. Incorrect standard errors
b. Heteroskedasticity
c. Serial correlation
d. Multicollinearity

Answer: d.

Explanation:

Age and experience are highly correlated, which leads to multicollinearity. In fact, low t-statistics combined with a high R^2 also suggest this problem. Answers a, b and c are not likely causes and are therefore incorrect.
What do we mean by low t-statistics here with reference to multicollinearity?

Thanks a lot.
Priyanka.

ShaktiRathore

1) Here the p-values are given, so no table is required: just compare each p-value with the significance level, here 0.05. If p-value < 0.05, reject the null hypothesis; otherwise, fail to reject it. The F-test tests the joint hypothesis, H0: β1 = 0 and β2 = 0, versus Ha: at least one of β1, β2 ≠ 0. Since the p-value of the F-test is 0.045 < 0.05, we reject the null outright and conclude that at least one of β1, β2 is nonzero.
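In code, the decision rule is just a comparison. A minimal Python sketch using the p-values from the question (the variable names are mine, not GARP's):

```python
# Joint null H0: beta1 = 0 and beta2 = 0 is tested with the F-statistic.
# At 95% confidence, the significance level is alpha = 1 - 0.95 = 0.05.
alpha = 0.05
p_t_beta1 = 0.07   # individual t-test p-values: not valid for the joint test
p_t_beta2 = 0.06
p_f = 0.045        # F-test p-value: the one that matters for the joint null

if p_f < alpha:
    print("Reject H0: at least one of beta1, beta2 is nonzero")
else:
    print("Fail to reject H0")
```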
thanks

Nicole Seaman


Hello @Priyanka_Chandak23

David provided an explanation here for your first question: https://forum.bionicturtle.com/threads/question-9-exam-2013-from-garp.7892/#post-30796. I hope this helps!

Nicole

David Harper CFA FRM

The second question is GARP 2011.P1.1 (https://forum.bionicturtle.com/thre...othesis-testing-quantitative.4127/#post-10954). I copied Gujarati's explanation below. This is the classic multicollinearity symptom: high R^2 but insignificant t-ratios.

The first thing is to recognize the apparent paradox: a high R^2 associates with a high F-ratio, e.g., F = [R^2/(k-1)]/[(1-R^2)/(n-k)], but we just need the intuition that both reflect on the joint hypothesis about the coefficients. A high R^2 suggests a good fit of the regression line in general, including most/all of the individual coefficients. Therefore, it is natural to expect from a high R^2 (and the corresponding high F-ratio) that most (or all) of the individual coefficients are significant.

But low t-ratios imply insignificance, where t-ratio = coefficient/SE(coefficient). In multicollinearity, the standard errors are overly large (as perfect multicollinearity is approached, the standard errors approach infinity). From the other perspective, if we had a set of insignificant regression coefficients (i.e., low t-ratios due to high standard errors), we would expect neither a high R^2 nor a high F-ratio. So this is an apparent paradox, but it is the classic symptom of multicollinearity.
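As a quick, purely illustrative check of that identity (the inputs R^2 = 0.8, n = 400, k = 3 are my own example numbers, not from the question):

```python
# Illustrative numbers only: R^2 = 0.8, n = 400 observations, k = 3 estimated
# parameters (intercept plus two slopes), using F = [R^2/(k-1)]/[(1-R^2)/(n-k)].
r2, n, k = 0.80, 400, 3
f_stat = (r2 / (k - 1)) / ((1 - r2) / (n - k))
print(f"F = {f_stat:.1f}")  # F = 794.0 -- a high R^2 forces a high F-ratio
```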

Another classic point: perfect multicollinearity is a logical problem with the regression. However, imperfect multicollinearity (i.e., correlated independent variables) is often a realistic model feature and not necessarily a "problem."

Gujarati, Basic Econometrics (Chapter 10: Multicollinearity)
1. High R^2 but few significant t ratios. As noted, this is the “classic” symptom of multicollinearity. If R^2 is high, say, in excess of 0.8, the F test in most cases will reject the hypothesis that the partial slope coefficients are simultaneously equal to zero, but the individual t tests will show that none or very few of the partial slope coefficients are statistically different from zero. This fact was clearly demonstrated by our consumption–income–wealth example.

Although this diagnostic is sensible, its disadvantage is that “it is too strong in the sense that multicollinearity is considered as harmful only when all of the influences of the explanatory variables on Y cannot be disentangled.”
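To see the symptom concretely, here is a hypothetical simulation sketch (toy data of my own construction, not GARP's; exact figures vary with the seed). The regressors are built to be nearly collinear, so R^2 and the F-statistic come out large while the individual slope t-ratios stay small:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400
age = rng.normal(40, 8, n)
experience = age - 22 + rng.normal(0, 0.2, n)  # almost a linear function of age
salary = 1000 + 50 * age + 50 * experience + rng.normal(0, 200, n)

X = sm.add_constant(np.column_stack([age, experience]))
res = sm.OLS(salary, X).fit()

print(f"corr(age, experience) = {np.corrcoef(age, experience)[0, 1]:.4f}")  # ~1
print(f"R^2 = {res.rsquared:.3f}")                # close to 1
print(f"F p-value = {res.f_pvalue:.2e}")          # joint test: highly significant
print("slope t-ratios:", np.round(res.tvalues[1:], 2))  # individually weak
```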

aalirahman

Hi David, can you please explain the following question from the GARP mock:

For a sample of 400 firms, the relationship between corporate revenue (Yi) and the average years of experience per employee (Xi) is modeled as follows:
Yi = β1 + β2*Xi + εi, i = 1, 2 ..., 400

An analyst wants to test the joint null hypothesis that β1 = 0 and β2 = 0 at the 95% confidence level. The p- value for the t-statistic for β1 is 0.07, and the p-value for the t-statistic for β2 is 0.06. The p-value for the F- statistic for the regression is 0.045. Which of the following statements is correct?

a. The analyst can reject the joint null hypothesis because each β is different from 0 at the 95% confidence level.
b. The analyst cannot reject the joint null hypothesis because neither β is different from 0 at the 95% confidence level.
c. The analyst can reject the joint null hypothesis because the F-statistic is significant at the 95% confidence level.
d. The analyst cannot reject the joint null hypothesis because the F-statistic is not significant at the 95% confidence level.

Thanks
 

David Harper CFA FRM

David Harper CFA FRM
Subscriber
Hi @aalirahman, see https://forum.bionicturtle.com/threads/2013-garp-exam-1-q9-remembering-f-statistics.7435/post-29618, i.e.:
Hi @Ekin4112 The t-stats are used to test the significance of the individual coefficients; e.g., is β1 significant? Is β2 significant? The key to the question is (emphasis mine): "You wish to test the joint null hypothesis that β1 = 0 and β2 = 0 at the 95% confidence level." A joint test of several/all coefficients requires an F-stat. With respect to the p-value (aka, exact significance level), I think the best thing to do with a p-value is to insert it into this sentence: "We are [100% - p%] confident that we can reject the null" (because confidence = 1 - significance). In this case, the p-value of 0.045 implies we can say "We are 95.5% confident we can reject the null," which means we can reject the null at any confidence level <= 95.5% but not at any confidence level greater than 95.5%. I hope that helps.
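A tiny sketch of that reading of the p-value (the helper name is mine, purely for illustration):

```python
# "We are (1 - p) confident that we can reject the null."
def max_confidence_to_reject(p_value: float) -> float:
    """Highest confidence level at which the null can still be rejected."""
    return 1.0 - p_value

print(f"{max_confidence_to_reject(0.045):.1%}")
# -> 95.5%: reject at any confidence level up to 95.5%, but not above it
```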

Nicole Seaman

@aalirahman

In addition to David's link above, please note that I moved your post to this thread, which also discusses this practice question. Please use the forum search to check whether your question has already been discussed before posting. Especially right before the exam, this saves everyone time answering the same question more than once. Thank you.