# P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: econometrics, Monte Carlo simulation (MCS), volatility, probability distributions, and value at risk (VaR, intro)

1. ### P1.T2.316. Discrete distributions (Topic review)

Hi Melody (@superpocoyo) I think visually. Consider the tree (copied below), but I showed two possible paths. In blue is an alternating sequence: u, d, u, d, u, d, u, d, u, d, u, d. In red is a different path: d, d, d, d, d, d, u, u, u, u, u, u. Any path to 100 involves 6 ups and 6 downs; there is a different path for each sequence. How many paths are there? C(12,6) = 924 combinations.
Replies: 5 · Views: 179
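The path count in the reply above can be verified in a couple of lines; a minimal sketch (the node value 100 and the 12-step tree are as described in the post):

```python
import math

# Any path through the 12-step tree that ends at the node in question must
# contain exactly 6 up-moves and 6 down-moves; each distinct ordering of
# those moves is one path, so the count is "12 choose 6".
paths = math.comb(12, 6)
print(paths)  # 924
```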
2. ### P1.T2.403. Probabilities

Hi @superpocoyo Right, I simply "condensed" the same idea; i.e., you are correct that per Bayes: P(speculative|default) = P(default|speculative)*P(speculative)/P(default). But notice: P(default|speculative)*P(speculative) = P(default, speculative); i.e., alternatively P(default|speculative) = P(default, speculative)/P(speculative). Such that we can also express Bayes as: P(speculative|default)...
Replies: 11 · Views: 214
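The "condensed" identity in the reply can be checked numerically; the probabilities below are hypothetical placeholders (not from the thread), chosen only to make the arithmetic easy to follow:

```python
# Hypothetical inputs: P(default|speculative), P(speculative), P(default)
p_d_given_s = 0.20
p_s = 0.30
p_d = 0.10

# Joint probability: P(default, speculative) = P(default|spec) * P(spec)
p_joint = p_d_given_s * p_s

# Bayes in its long form and in the "condensed" joint-probability form:
long_form = p_d_given_s * p_s / p_d
condensed = p_joint / p_d
print(round(long_form, 6), round(condensed, 6))  # 0.6 0.6
```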
3. ### L1.T2.107 GARCH/EWMA maximum likelihood method (MLE)

Hi @superpocoyo (Melody) I agree, I just did a keyword search of the AIM syllabus and it appears that MLE has been dropped. Thanks,
Replies: 6 · Views: 176
4. ### P1.T2.314. Miller's one- and two-tailed hypotheses

Hi Melody (@superpocoyo) I used: 1 - T.DIST(1.1736, 9, TRUE) = 1 - 86.4% = 13.5%. Thanks,
Replies: 9 · Views: 215
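The Excel result quoted above can be reproduced without Excel; a stdlib-only sketch that integrates the Student's t pdf numerically via Simpson's rule (`steps` is just a precision knob, my choice):

```python
import math

def student_t_sf(t, df, steps=100_000):
    """One-tailed p-value P(T > t) for Student's t with df degrees of
    freedom, via Simpson's-rule integration of the pdf over [0, t]."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda x: c * (1 + x * x / df) ** (-(df + 1) / 2)
    h = t / steps
    s = pdf(0) + pdf(t)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * pdf(i * h)
    # By symmetry of the t distribution, P(T > t) = 0.5 - integral(0..t)
    return 0.5 - s * h / 3

p = student_t_sf(1.1736, 9)
print(round(p, 3))  # one-tailed p-value, ~13.5% per the thread
```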
5. ### L1.T2.118. Student's t distribution

Gotcha, thanks. This question came up before the actual study notes by Stock and Watson.
Replies: 3 · Views: 67
6. ### P1.T2.512. Autoregressive moving average (ARMA) processes

Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
Replies: 0 · Views: 63
7. ### P1.T2.509. Box-Pierce and Ljung-Box Q-statistics

Hi Joyce, Wonder how I made a mistake - yes, you are right, I was looking at Chi-square 95%, 24 instead of Chi-square 5%, 24 = 36.415! Thanks a tonne, Jayanthi
Replies: 3 · Views: 102
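The corrected lookup (chi-square, upper-tail 5%, 24 df) can be sanity-checked in code; a stdlib-only sketch using the lower-incomplete-gamma series for the chi-square CDF (the series form and term count are my implementation choices):

```python
import math

def chi2_cdf(x, df, terms=200):
    """Chi-square CDF via the regularized lower-incomplete-gamma series:
    P(s, z) with s = df/2, z = x/2."""
    s, z = df / 2, x / 2
    term = total = 1.0 / s
    for n in range(1, terms):
        term *= z / (s + n)     # next series term: z^n / (s(s+1)...(s+n))
        total += term
    # Multiply by z^s * e^(-z) / Gamma(s), computed in log space
    return total * math.exp(s * math.log(z) - z - math.lgamma(s))

print(round(chi2_cdf(36.415, 24), 3))  # ~0.95, so 36.415 is the 5% upper tail
```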
8. ### L1.T2.89 OLS standard errors

Hi David, Standard Error: SQRT[Variance/n]. Shouldn't this then be: SQRT[SER^2/Variance(X) * n]/n?
Replies: 4 · Views: 117
9. ### P1.T2.218. Theory of Ordinary Least Squares (Stock & Watson)

In reference to homoskedastic: sometimes it is mentioned as "variance constant" and other times "mean zero"... "The error term u(i) is homoskedastic if the variance of the conditional distribution of u(i) given X(i) is constant for i = 1,…,n and in particular does not depend on X(i)." Do both mean the same thing?
Replies: 6 · Views: 150
10. ### L1.T2.74 F-distribution

Not to worry - they will give you the lookup table for the F-distribution, if they do. Thanks! Jayanthi
Replies: 18 · Views: 154
11. ### P1.T2.508. Wold's theorem

Learning outcomes: Describe Wold’s theorem. Define a general linear process. Relate rational distributed lags to Wold’s theorem. Questions: 508.1. Wold's representation theorem points to an appropriate model for a covariance stationary residual such that: a. Any autoregressive process of (p) order can be expressed as a rational polynomial of lagged errors b. Any purely nondeterministic...
Replies: 0 · Views: 102
12. ### P1.T2.507. White noise

Learning outcomes: Define white noise; describe independent white noise and normal (Gaussian) white noise. Explain the characteristics of the dynamic structure of white noise. Explain how a lag operator works. Questions: 507.1. In regard to white noise, each of the following statements is true EXCEPT which is false? a. If a process is zero-mean white noise, then it must be Gaussian white...
Replies: 0 · Views: 89
13. ### P1.T2.206. Variance of sample average

I am asking a kind of dumb question, but where is this formula in the Miller chapter (please give me the reference in David's PDF)?
Replies: 20 · Views: 524
14. ### L1.T2.77 Confidence interval

Hi @Jayanthi Sankaran yes, you are correct. Fixed above. Thanks!
Replies: 13 · Views: 123
15. ### L1.T2.105 Generalized auto regressive conditional heteroscedasticity, GARCH(p,q)

Thank you, David, for taking out the time to answer. That clears my doubt. Have a nice evening.
Replies: 8 · Views: 145
16. ### P1.T2.208. Sample mean estimators (Stock & Watson)

Hi David, I was just referring to the previous discussion to give a better understanding of my question. Thanks a lot for your time and patience. Praveen
Replies: 21 · Views: 355
17. ### P1.T2.409 Volatility, GARCH(1,1) and EWMA

Per @Robert Paterson's correction, the first bullet under 409.2.A is corrected to read: In regard to (a), this is FALSE: because the weights sum to one (i.e., alpha + beta + gamma = 1.0) and omega = long-run variance*gamma, the long-run volatility = SQRT[omega/gamma] = sqrt[omega/(1 - alpha - beta)] = sqrt[0.0000960/(1 - 0.060 - 0.880)] = sqrt[0.0000960/0.060] = 4.0% (+1 star for @Robert...
Replies: 2 · Views: 115
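The corrected arithmetic can be replayed directly; a minimal sketch using the numbers quoted in the bullet:

```python
import math

# GARCH(1,1) inputs quoted in the thread
alpha, beta = 0.060, 0.880
omega = 0.0000960

# The weight on the long-run variance is what remains after alpha and beta
gamma = 1 - alpha - beta              # = 0.060
long_run_vol = math.sqrt(omega / gamma)
print(round(long_run_vol, 4))  # 0.04, i.e. 4.0%
```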
18. ### L1.T2.128 Simulation with inverse transform method

Hi @Tipo GBM doesn't contain the deviate; GBM models the asset price: price change = drift*Δt + sigma*epsilon*sqrt(Δt). Dowd's market risk VaR = -drift + sigma*deviate precisely because the +drift is positive in GBM. Say drift is 10% and sigma is 30%. We can input those into GBM to model the asset price. But how is risk measured (VaR)? It's a loss which is mitigated by the drift. So...
Replies: 6 · Views: 83
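The drift-mitigation point can be made concrete; a sketch using the post's illustrative drift and sigma, with an assumed 95% confidence level (z = 1.645 is my assumption, not from the thread):

```python
drift, sigma = 0.10, 0.30   # the post's illustrative drift and volatility
z = 1.645                   # assumed 95% one-tailed normal deviate

# Dowd-style VaR: the loss quantile, mitigated by the positive drift
var_95 = -drift + sigma * z
print(round(var_95, 4))  # 0.3935, versus sigma*z = 0.4935 with no drift
```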
19. ### L1.T2.103 Weighting schemes to estimate volatility

Hi @Tipo Per Hull 22.5 the ARCH(m) is given by: \sigma_n^2 = \gamma V_L + \sum_{i=1}^{m} \alpha_i u_{n-i}^2. It generalizes all three: GARCH: positive gamma, i.e., at least some weight assigned to unconditional variance, V(L), and exponential weights where decay is beta; EWMA: zero gamma (i.e., no unconditional variance) but exponential weights; MA: zero gamma and...
Replies: 9 · Views: 300
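The nesting described above can be sketched with toy numbers (every input here is hypothetical, chosen only to show how each scheme weights the same squared returns):

```python
V_L = 0.0001             # hypothetical unconditional (long-run) variance
u2 = [0.0004, 0.0001]    # hypothetical squared returns, most recent first

# GARCH-like: positive gamma (weight on V_L) plus decaying return weights
gamma, alphas = 0.10, [0.54, 0.36]   # gamma + sum(alphas) = 1.0
garch_var = gamma * V_L + sum(a * u for a, u in zip(alphas, u2))

# EWMA: zero gamma, exponentially decaying weights with lambda = 0.94
lam = 0.94
ewma_var = sum((1 - lam) * lam**i * u for i, u in enumerate(u2))

# MA: zero gamma and equal weights (simple moving average)
ma_var = sum(u2) / len(u2)
```

Only two lags are shown for brevity; over a full window the EWMA weights (1 - lambda)*lambda^i sum to one, matching the zero-gamma special case.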
20. ### L1.T2.57 Methodology of Econometrics

Hi @tosuhn these are aged questions (from Gujarati's econometrics, which is no longer assigned), so most of this won't appear on the exam. Thanks,
Replies: 4 · Views: 63