P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. Suzanne Evans

    P1.T2.219. Omitted variable bias

    I have a query regarding the following phrase: "(2) when the omitted variable is a determinant of the dependent variable." At first I thought, "well, if it's a determinant, then by definition there must be some element of correlation." But is the issue that correlation is only a linear measure, so it may be a non-linear determinant with no necessary correlation?
    Replies: 5 · Views: 153
  2. Suzanne Evans

    P1.T2.213. Sample variance, covariance and correlation (Stock & Watson)

    Thanks Nicole - appreciate it:) Jayanthi
    Replies: 10 · Views: 199
  3. Nicole Seaman

    PQ-T2 P1.T2.320. Statistical inference: hypothesis testing and confidence intervals (topic review)

    Thank you Jayanthi.......... Helps a lot
    Replies: 5 · Views: 261
  4. Suzanne Evans

    P1.T2.205 Sampling distributions (Stock & Watson)

    Thanks, David. I agree that your questions go deeper than the notes, which is definitely great for gaining a deep understanding. I'll be honest, I got a little frustrated as I had only gone up to chapters 2 and 3 (per your notes) which map to chapter 1 and 2 in the GARP ebooks (I'm not a big fan of Pearson etext); I thought I was missing some concepts. Going forward, I'll make sure I've...
    Replies: 9 · Views: 280
  5. Nicole Seaman

    PQ-T2 P1.T2.317. Continuous distributions (Topic review)

    Hi @Keshav It starts with the definition of a mixture distribution. Any distributions can be mixed, but in this case it happens to be a normal mixture distribution; the probability density function (pdf) of the normal mixture distribution is the result of adding together two "component" normal pdfs. If there are only two component normal pdfs that are "mixed," then the resulting mixture...
    Replies: 6 · Views: 249
  6. Suzanne Evans

    P1.T2.204. Joint, marginal, and conditional probability functions (Stock & Watson)

    Hi Melody (@superpocoyo ) Here is the spreadsheet @ Please note that, in my response to mastvikas above, I had a typo which I've now corrected. It should read: (10 - 29.38)^2*(0.05/.32) = 58.65 105.859 is the conditional variance which determines the answer of 10.3 (the conditional standard deviation). I think the key here is to realize that, after we grok the conditionality, we are...
    Replies: 10 · Views: 343
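The conditional-variance arithmetic in the reply above can be sketched in a few lines of Python. The joint probabilities below are hypothetical stand-ins (the original spreadsheet is not reproduced here); only the P(X) = 0.32 denominator deliberately echoes the thread.

```python
# Hypothetical joint pmf of (X, Y); values are illustrative only.
joint = {  # (x, y): P(X = x, Y = y)
    (1, 10): 0.05,
    (1, 30): 0.17,
    (1, 40): 0.10,
    (2, 20): 0.30,
    (2, 50): 0.38,
}
x0 = 1
# Marginal probability of the conditioning event, P(X = x0)
p_x = sum(p for (x, _), p in joint.items() if x == x0)
# Conditional pmf: P(Y = y | X = x0) = P(X = x0, Y = y) / P(X = x0)
cond = {y: p / p_x for (x, y), p in joint.items() if x == x0}
# Conditional mean, variance, and standard deviation
mean = sum(y * p for y, p in cond.items())
var = sum((y - mean) ** 2 * p for y, p in cond.items())
sd = var ** 0.5
```

Each squared deviation is weighted by the conditional probability p/p_x, which is exactly the (joint ÷ marginal) pattern in the thread's worked term.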
  7. Nicole Seaman

    PQ-T2 P1.T2.318. Distributional moments (Topic review)

    Hi @Bester I'm not sure "why" either. Skew and variance of Poisson aren't in the notes; they aren't particularly important. The point of the question is to show how skew and kurtosis are calculated; they happen to be elegant for the Poisson. I think it's good to notice that the Poisson is necessarily positively skewed and heavy-tailed, though. Thanks,
    Replies: 4 · Views: 122
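The closed forms behind this answer, skew = 1/√λ and excess kurtosis = 1/λ, can be checked numerically by summing the Poisson pmf over a wide support (λ = 2 below is an arbitrary choice):

```python
from math import exp, factorial, sqrt

lam = 2.0
ks = range(100)  # support wide enough that the truncated tail is negligible
pmf = [exp(-lam) * lam**k / factorial(k) for k in ks]

mean = sum(k * p for k, p in zip(ks, pmf))
var = sum((k - mean) ** 2 * p for k, p in zip(ks, pmf))
# Standardized third and fourth central moments
skew = sum((k - mean) ** 3 * p for k, p in zip(ks, pmf)) / var**1.5
excess_kurt = sum((k - mean) ** 4 * p for k, p in zip(ks, pmf)) / var**2 - 3
```

Both come out positive for any λ, which is the point made above: the Poisson is necessarily positively skewed and heavy-tailed, and increasingly so as λ shrinks.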
  8. David Harper CFA FRM

    L1.T2.61 Statistical dependence

    Thank you @joshsmith0221 !
    Replies: 5 · Views: 95
  9. Nicole Seaman

    P1.T2.316. Discrete distributions (Topic review)

    Hi Melody (@superpocoyo ) I think visually. Consider the tree (copied below) but I showed two possible paths: In blue is an alternating sequence: u, d, u, d, u, d, u, d, u, d, u, d In red is a different path: d, d, d, d, d, d, u, u, u, u, u, u Any path to 100 involves 6 ups and 6 downs; there is a different path for each sequence. How many paths are there? C(12,6) = 924 combinations.
    Replies: 5 · Views: 185
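The path count above is quick to confirm, both with the combination formula and by brute-force enumeration of all 2^12 up/down sequences:

```python
from math import comb
from itertools import product

# Choose which 6 of the 12 steps are "up" moves; order distinguishes paths.
paths = comb(12, 6)
print(paths)  # 924

# Brute force: count every 12-step sequence with exactly 6 ups (4,096 sequences).
brute = sum(1 for seq in product("ud", repeat=12) if seq.count("u") == 6)
assert brute == paths
```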
  10. Nicole Seaman

    P1.T2.403. Probabilities

    Hi @superpocoyo Right, I simply "condensed" the same idea; i.e., you are correct that per Bayes: P(speculative|default) = P(default|speculative)*P(speculative)/P(default) But notice: P(default|speculative)*P(speculative) = P(default, speculative); i.e., alternatively P(default|speculative)= P(default, speculative)/P(speculative). Such that we can also express Bayes as: P(speculative|default)...
    Replies: 11 · Views: 245
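The two equivalent forms of Bayes' rule described above can be verified with hypothetical numbers (the probabilities below are illustrative, not from the original question):

```python
# Hypothetical inputs: 30% of issuers are speculative-grade,
# with the conditional default rates given below.
p_spec = 0.30
p_default_given_spec = 0.10
p_default_given_inv = 0.02  # investment-grade default rate

# Total probability of default across both grades
p_default = p_default_given_spec * p_spec + p_default_given_inv * (1 - p_spec)
# Joint probability: P(default, speculative) = P(default|speculative) * P(speculative)
p_joint = p_default_given_spec * p_spec

# Classic Bayes form vs. the "condensed" joint-probability form
bayes = p_default_given_spec * p_spec / p_default
condensed = p_joint / p_default
assert abs(bayes - condensed) < 1e-12
```

Substituting the joint probability for the numerator product is the "condensing" step: the two expressions are algebraically identical.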
  11. David Harper CFA FRM

    L1.T2.107 GARCH/EWMA maximum likelihood method (MLE)

    Hi @superpocoyo (Melody) I agree, I just did a keyword search of the AIM syllabus and it appears that MLE has been dropped. Thanks,
    Replies: 6 · Views: 181
  12. David Harper CFA FRM

    L1.T2.118. Student's t distribution

    Gotcha, thanks. This question came up before the actual study notes by Stock and Watson.
    Replies: 3 · Views: 69
  13. Nicole Seaman

    P1.T2.512. Autoregressive moving average (ARMA) processes

    Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
    Replies: 0 · Views: 68
  14. Nicole Seaman

    P1.T2.509. Box-Pierce and Ljung-Box Q-statistics

    Hi Joyce, Wonder how I made a mistake - yes, you are right, I was looking at Chi-square 95%, 24 instead of Chi-square 5%, 24 = 36.415! Thanks a tonne:) Jayanthi
    Replies: 3 · Views: 111
  15. Suzanne Evans

    P1.T2.218. Theory of Ordinary Least Squares (Stock & Watson)

    In reference to homoskedastic: sometimes it is mentioned as "variance constant" and other times "mean zero"... "The error term u(i) is homoskedastic if the variance of the conditional distribution of u(i) given X(i) is constant for i = 1,…,n and in particular does not depend on X(i)." Do both mean the same thing?
    Replies: 6 · Views: 157
  16. David Harper CFA FRM

    L1.T2.74 F-distribution

    Not to worry - they will give you the lookup table for the F distribution, if they do:) Thanks! Jayanthi
    Replies: 18 · Views: 157
  17. Nicole Seaman

    P1.T2.508. Wold's theorem

    Learning outcomes: Describe Wold's theorem. Define a general linear process. Relate rational distributed lags to Wold's theorem. Questions: 508.1. Wold's representation theorem points to an appropriate model for a covariance stationary residual such that: a. Any autoregressive process of (p) order can be expressed as a rational polynomial of lagged errors b. Any purely nondeterministic...
    Replies: 0 · Views: 118
  18. Nicole Seaman

    P1.T2.507. White noise

    Learning outcomes: Define white noise; describe independent white noise and normal (Gaussian) white noise. Explain the characteristics of the dynamic structure of white noise. Explain how a lag operator works. Questions: 507.1. In regard to white noise, each of the following statements is true EXCEPT which is false? a. If a process is zero-mean white noise, then it must be Gaussian white...
    Replies: 0 · Views: 100
  19. Suzanne Evans

    P1.T2.206. Variance of sample average

    I am asking a kind of dumb question, but where is this formula in the Miller chapter (please give me the reference in David's PDF)?
    Replies: 20 · Views: 557
  20. David Harper CFA FRM

    L1.T2.77 Confidence interval

    Hi @Jayanthi Sankaran yes, you are correct. Fixed above. Thanks!
    Replies: 13 · Views: 130
