P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. Nicole Seaman

    P1.T2.719. One- versus two-tailed hypothesis tests (Miller Ch.7)

    Thank you so very much @David Harper CFA FRM - Crystal clear now.
    Replies: 8 · Views: 91
  2. Nicole Seaman

    P1.T2.718. Confidence in the mean and variance (Miller Ch.7)

    Hi @FlorenceCC By design, this question produced a critical value that just happens to be displayed on the lookup table. We can interpolate to approximate; for example, if the test statistic were 1.580, that's halfway between the displayed values at 0.10 and 0.05 (i.e., between 1.363 and 1.796) such that we could approximate the p-value as 7.50% (it won't be exactly correct as the underlying...
    Replies: 2 · Views: 46
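The interpolation described in the reply above can be sketched in a few lines. The table values (1.363 at p = 0.10 and 1.796 at p = 0.05, one-tailed Student's t) come from the thread; the helper name is my own.

```python
# Linear interpolation of a one-tailed p-value between two t-table columns.
# Table values from the thread: t = 1.363 at p = 0.10 and t = 1.796 at p = 0.05.
def interpolate_p(t_stat, t_lo=1.363, p_lo=0.10, t_hi=1.796, p_hi=0.05):
    """Approximate the p-value by assuming p is linear in t between the two columns."""
    w = (t_stat - t_lo) / (t_hi - t_lo)  # fractional distance between the columns
    return p_lo + w * (p_hi - p_lo)

print(round(interpolate_p(1.580), 3))  # 1.580 is about halfway -> 0.075
```

As the reply notes, this is only an approximation: the true t distribution is not linear between table columns.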
  3. David Harper CFA FRM

    P1.T2.717. Bayes' Theorem (Miller, Ch.6)

    Hi @FRM candidate Because we still don't know if the model is good or bad :cool:! Before the introduction of any evidence, the prior (unconditional) probabilities are: 80% probability the model is good and 20.0% that it is bad. Then we observe two exceptions in a row; we employ Bayes to revise the probabilities based on this evidence. The posterior probability that the model is bad thus increases...
    Replies: 2 · Views: 53
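The Bayes update in the reply can be sketched numerically. The 80%/20% priors come from the thread; the per-period exception probabilities (5% for a good model, 10% for a bad one) are hypothetical placeholders, not the original question's numbers.

```python
# Bayes' theorem: revise P(bad model) after observing two exceptions in a row.
p_good, p_bad = 0.80, 0.20   # priors, per the thread
lik_good = 0.05 ** 2         # hypothetical: P(two independent exceptions | good model)
lik_bad = 0.10 ** 2          # hypothetical: P(two independent exceptions | bad model)

joint_good = p_good * lik_good
joint_bad = p_bad * lik_bad
posterior_bad = joint_bad / (joint_good + joint_bad)
print(round(posterior_bad, 3))  # under these assumptions, jumps from 0.20 to 0.5
```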
  4. David Harper CFA FRM

    P1.T2.716. Central limit theorem and mixture distributions (Miller, Ch 4)

    @Alfahad Exactly. Your first attempt would be correct if the distribution were the sum of normal random variable X plus normal random variable Y (and, it appears, you are assuming independence in doing so). But the mixture distribution is more like you say: "it's a distribution that follows either normal depending on state;" i.e., the outcome is not the sum but rather it is firstly either...
    Replies: 3 · Views: 49
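The distinction in the reply above (a mixture draws the state first, then samples from that state's normal, versus summing X + Y) can be sketched as follows; the 50/50 weights and the two volatilities are illustrative, not from the question.

```python
import random

# Mixture of normals: first pick the state, THEN sample from that state's normal.
# This is not the same random variable as the sum X + Y. Parameters are illustrative.
def mixture_draw(p=0.5, sd_calm=1.0, sd_stress=3.0):
    """Bernoulli(p) picks the regime; the outcome is one draw from the selected normal."""
    sd = sd_calm if random.random() < p else sd_stress
    return random.gauss(0.0, sd)

random.seed(42)
draws = [mixture_draw() for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
# Theoretical mixture variance (equal means): 0.5*1^2 + 0.5*3^2 = 5.0
print(round(var, 1))
```

Note the mixture is leptokurtic (heavy-tailed) even though each component is normal, which is a key reason mixtures are used to model returns.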
  5. Nicole Seaman

    P1.T2.715. Chi-squared distribution, Student’s t, and F-distributions (Miller Ch.4)

    Learning Objectives: Distinguish the key properties among the following distributions: ... Chi-squared distribution, Student’s t, and F-distributions. Questions: For the following questions, please rely on the statistical lookup tables provided here. This document contains four lookup tables (note that each also contains an example): cumulative standard normal distribution, student's t...
    Replies: 0 · Views: 39
  6. Nicole Seaman

    P1.T2.714. Lognormal distribution (Miller Chapter 4)

    Thanks @emilioalzamora1 ! In the first case, you are "scaling up" the distribution over time; the drift is scaling by ΔT (well almost, it's dragged down by volatility) and the variance is scaling by ΔT. So it's similar to scaling a one-month asset distribution to one-year, or further scaling up to three years: you might start with a 10-day distribution, but you end up with a distribution...
    Replies: 5 · Views: 81
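The time-scaling in the reply can be sketched under geometric Brownian motion assumptions; the 10% drift and 20% annual volatility are illustrative numbers, not from the question.

```python
import math

# Scaling a lognormal (GBM) return distribution over horizon T: the log-return mean
# scales with T (dragged down by -sigma^2/2), and the variance scales with T.
def log_return_params(mu, sigma, T):
    """Mean and standard deviation of the T-year log return."""
    return (mu - 0.5 * sigma ** 2) * T, sigma * math.sqrt(T)

m1, s1 = log_return_params(mu=0.10, sigma=0.20, T=1.0)
m3, s3 = log_return_params(mu=0.10, sigma=0.20, T=3.0)
print(round(m1, 4), round(s1, 4))  # 0.08 0.2
print(round(m3, 4), round(s3, 4))  # 0.24 0.3464
```

The mean grows linearly with T while the standard deviation grows with the square root of T, which is exactly the "scaling up" the reply describes.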
  7. Nicole Seaman

    P1.T2.713. Uniform, binomial, Poisson distributions (Miller Ch.4)

    Hi @jshi yes, you are correct, I fixed the typo above, although it looks like the rest is okay because notice I did use (b-a)^2/12 when solving: σ^2(A) = (3-0)^2/12 = 0.75 and σ^2(B) = (10-4)^2/12 = 3.0. Thank you!
    Replies: 2 · Views: 102
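The uniform-variance arithmetic in the reply is a quick check of the (b-a)^2/12 formula:

```python
# Variance of a continuous uniform on [a, b] is (b - a)^2 / 12, as used in the reply.
def uniform_var(a, b):
    return (b - a) ** 2 / 12

print(uniform_var(0, 3))   # sigma^2(A) = 0.75
print(uniform_var(4, 10))  # sigma^2(B) = 3.0
```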
  8. Nicole Seaman

    P1.T2.712. Skew, kurtosis, coskew and cokurtosis (Miller, Chapter 3)

    Learning objectives: Describe the four central moments of a statistical variable or distribution: mean, variance, skewness, and kurtosis. Interpret the skewness and kurtosis of a statistical distribution, and interpret the concepts of coskewness and cokurtosis. Describe and interpret the best linear unbiased estimator. Questions: 712.1. Consider the following discrete probability...
    Replies: 0 · Views: 46
  9. Nicole Seaman

    P1.T2.711. Covariance and correlation (Miller, Ch.3)

    Learning objectives: Calculate and interpret the covariance and correlation between two random variables. Calculate the mean and variance of sums of variables. Questions: 711.1. The following probability matrix displays joint probabilities for an inflation outcome, I = {2, 3, or 4}, and an unemployment outcome, U = {5, 7 or 9}. Also shown are the expected values and variances for each...
    Replies: 0 · Views: 65
  10. Nicole Seaman

    P1.T2.710. Mean and standard deviation (Miller, Ch.3)

    Hi @drewmanfsu You really only need the power rule. For the mean, x*f(x) = x*3*x^2/64 = 3*x^3/64 = 3/64*x^3. Applying the power rule, the integral of (3/64)*x^3 = (3/64)*x^4/4 = (3/256)*x^4. To evaluate this integral over {0,4} we have (3/256)*4^4 - (3/256)*0^4; i.e., we really only need (3/256)*4^4 = (3/256)*256 = 3.0. For the variance, we want the integral of (x - 3)^2*3*x^2/64 =...
    Replies: 2 · Views: 149
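The power-rule integrals in the reply can be checked numerically for the pdf f(x) = 3x^2/64 on [0, 4]; the midpoint-sum helper is my own.

```python
# Numeric check of the reply's integrals for the pdf f(x) = 3*x^2/64 on [0, 4].
def integrate(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 3 * x ** 2 / 64
mean = integrate(lambda x: x * f(x), 0, 4)               # analytically (3/256)*4^4 = 3.0
var = integrate(lambda x: (x - mean) ** 2 * f(x), 0, 4)  # E[(X - mean)^2]
print(round(mean, 4), round(var, 4))  # 3.0 0.6
```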
  11. Nicole Seaman

    P1.T2.709. Joint probability matrices (Miller Ch.2)

    Got it. Thanks!
    Replies: 5 · Views: 97
  12. Nicole Seaman

    P1.T2.708. Probability function fundamentals (Miller Ch. 2)

    Thanks a ton David. Really appreciate the explanation and the super quick response. :):):):)
    Replies: 9 · Views: 163
  13. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Thank you David.
    Replies: 3 · Views: 90
  14. Nicole Seaman

    P1.T2.706. Bivariate normal distribution (Hull)

    @David Harper CFA FRM, makes perfect sense now. thanks for taking the time again.
    Replies: 8 · Views: 129
  15. Nicole Seaman

    P1.T2.705. Correlation (Hull)

    Thank you emilioalzamora and David for such a detailed explanation.
    Replies: 13 · Views: 222
  16. Nicole Seaman

    P1.T2.704. Forecasting volatility with GARCH (Hull)

    Hi @PaulTomlin I don't think the "rate" has any special significance here: the terms in this GARCH-based projection are variances (true, Hull calls them "variance rates") which are the square of volatility. They tend to differ in the periodicity; i.e., daily variance versus per annum variance, but the "rate" does not to my knowledge introduce any distinction. If the daily volatility is 1.0%,...
    Replies: 8 · Views: 146
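The variance-versus-volatility point can be sketched with the reply's 1.0% daily volatility; the 252-trading-day annualization convention is assumed.

```python
import math

# "Variance rate" is just volatility squared; only the periodicity (daily vs annual) differs.
daily_vol = 0.01                    # 1.0% per day, per the reply
daily_var = daily_vol ** 2          # 0.0001 per day
annual_var = daily_var * 252        # variance scales linearly with time (i.i.d. returns)
annual_vol = math.sqrt(annual_var)  # volatility scales with the square root of time
print(round(annual_vol, 4))  # 0.1587, i.e., about 15.87% per annum
```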
  17. Nicole Seaman

    P1.T2.703. EWMA versus GARCH volatility (Hull)

    Thank you @lRRAngle for reporting! Sorry, good catch. @Nicole Seaman It does appear to be missing the "=" sign (the rest looks okay)
    Replies: 8 · Views: 133
  18. Nicole Seaman

    P1.T2.699. Linear and nonlinear trends (Diebold)

    @truongngo Thank you for pointing this out. I will make sure that this is fixed. Nicole
    Replies: 19 · Views: 257
  19. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    @BoobyMiles looks like you are using 0.00410/9 rather than 0.00441/9
    Replies: 2 · Views: 71
  20. Nicole Seaman

    P1.T2.701. Regression analysis to model seasonality (Diebold)

    Many thanks for your lovely comment, Brian. It is nothing special, I guess; I have just read a few textbooks, that's it. The modelling (implementation work) is another kettle of fish. These "goodness of fit" tests are technically quite complex. Again, thanks for your like!
    Replies: 11 · Views: 136
  21. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    @David Harper CFA FRM got it!! It’s a really important question to understand the difference between s=n dummy variables or s=n-1 plus an intercept, thank you.
    Replies: 5 · Views: 99
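The s versus s-1 dummy point can be demonstrated with a rank check: with all s seasonal dummies plus an intercept, the dummies sum to the intercept column and the design matrix is rank-deficient (the "dummy variable trap"). The toy quarterly data and the pure-Python rank helper below are my own illustration.

```python
# Rank of a matrix (list of rows) via Gaussian elimination; no external libraries assumed.
def rank(rows):
    m = [r[:] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-9:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

quarters = [0, 1, 2, 3] * 2                            # two years of quarterly data
dummies = [[1.0 if q == j else 0.0 for j in range(4)] for q in quarters]
with_intercept = [[1.0] + d for d in dummies]          # intercept + s=4 dummies: 5 columns
print(rank(with_intercept))                            # 4, not 5 -> perfect collinearity
drop_one = [[1.0] + d[1:] for d in dummies]            # intercept + s-1=3 dummies
print(rank(drop_one))                                  # 4 -> full column rank, estimable
```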
  22. Nicole Seaman

    P1.T2.602. Bootstrapping (Brooks)

    a GARCH process is covered in the readings.... Simulations are used to produce samples from distributions that are not parametric or not in "closed form" or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes or flood levels or survival in space.....
    Replies: 4 · Views: 146
  23. Nicole Seaman

    P1.T2.601. Variance reduction techniques (Brooks)

    Learning objectives: Explain how to use antithetic variate technique to reduce Monte Carlo sampling error. Explain how to use control variates to reduce Monte Carlo sampling error and when it is effective. Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them. Questions: 601.1. Betty is an analyst using Monte Carlo simulation to...
    Replies: 0 · Views: 106
  24. Nicole Seaman

    P1.T2.600. Monte Carlo simulation, sampling error (Brooks)

    @lRRAngle if var(x) is the variance of an estimate over (N) replications in a simulation, then the standard error (of the mean) of this estimate, S = sqrt[var(x)/N]; this is thematic per CLT: the standard error scales by 1/sqrt(N). Paul wants S no greater than 0.10, so he wants 0.10 = sqrt(36/N); i.e., he is trying to find out how many trials he needs to get the SE down to 0.10. As 0.10 =...
    Replies: 6 · Views: 170
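The standard-error arithmetic in the reply can be sketched directly:

```python
import math

# Standard error of a simulated mean: S = sqrt(var(x)/N), so N = var(x)/S^2.
var_x = 36.0       # variance of the estimate, per the thread
target_se = 0.10   # the ceiling on the standard error
N = var_x / target_se ** 2
print(round(N))  # 3600 replications
assert math.isclose(math.sqrt(var_x / N), target_se)  # check: S is back at the target
```

This is the CLT scaling in action: to cut the standard error by a factor of k, you need k^2 times as many replications.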
  25. Nicole Seaman

    P1.T2.512. Autoregressive moving average (ARMA) processes (Diebold)

    Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
    Replies: 0 · Views: 90
  26. David Harper CFA FRM

    P1.T2.511. First-order autoregressive, AR(1), process (Diebold)

    Yes, if you look at the GARP curriculum for this year, you will see that these learning objectives are still under Topic 2, Reading 16, Diebold, Chapter 8. Thank you, Nicole
    Replies: 8 · Views: 173
  27. Nicole Seaman

    P1.T2.510. First-order and general finite-order moving average process, MA(1) and MA(q) (Diebold)

    @lRRAngle The MA(1) has both an unconditional and a conditional mean; Q 510.1 is just asking about an updated realization
    Replies: 5 · Views: 270
  28. Nicole Seaman

    P1.T2.509. Box-Pierce and Ljung-Box Q-statistics (Diebold)

    Hi Joyce, Wonder how I made a mistake - yes, you are right, I was looking at Chi-square 95%, 24 instead of Chi-square 5%, 24 = 36.415! Thanks a tonne :) Jayanthi
    Replies: 3 · Views: 214
  29. Nicole Seaman

    P1.T2.508. Wold's theorem (Diebold)

    Hi David I am slightly confused by question 508.2 - if Wold's theorem is based on an infinite series of white noise terms, and these themselves have both constant/finite conditional and unconditional means and variances, how could our conditional mean vary over time with the information set? :confused: Thank you!
    Replies: 5 · Views: 341
  30. Nicole Seaman

    P1.T2.507. White noise (Diebold)

    Learning outcomes: Define white noise, describe independent white noise and normal (Gaussian) white noise. Explain the characteristics of the dynamic structure of white noise. Explain how a lag operator works. Questions: 507.1. In regard to white noise, each of the following statements is true EXCEPT which is false? a. If a process is zero-mean white noise, then it must be Gaussian white...
    Replies: 0 · Views: 163
