P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. ziminli1228

    Correlation update and volatility forecasting

    Hi @ziminli1228 I almost didn't know to which problem etc you are referring; more specific references can be helpful. I figured it out really only because I've had to do these calcs several times. In order to update the correlation, from yesterday ρ(n-1) to today ρ(n), needed are both (i) an updated covariance between X and Y and (ii) updated variances/StdDevs of each of X and Y so that we...
    Replies:
    1
    Views:
    11
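The update mechanics discussed in this thread can be sketched in a few lines. The smoothing parameter and all input estimates below are hypothetical, chosen only to illustrate the recursion (the thread snippet does not supply numbers):

```python
# EWMA updates for the covariance and both variances, then the updated
# correlation. All numeric inputs are hypothetical illustration values.
lam = 0.94                       # EWMA smoothing parameter (RiskMetrics-style)
cov_prev, varx_prev, vary_prev = 0.000060, 0.000100, 0.000090
x_ret, y_ret = 0.010, -0.005     # today's returns on X and Y

# EWMA recursion: new estimate = lam * old estimate + (1 - lam) * product of returns
cov_new = lam * cov_prev + (1 - lam) * x_ret * y_ret
varx_new = lam * varx_prev + (1 - lam) * x_ret ** 2
vary_new = lam * vary_prev + (1 - lam) * y_ret ** 2

# The updated correlation needs BOTH the updated covariance and the
# updated standard deviations, exactly as the reply describes
rho_new = cov_new / (varx_new ** 0.5 * vary_new ** 0.5)
```

The point of the reply is visible in the last line: neither the covariance update alone nor the variance updates alone are enough; the new correlation requires all three.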
  2. Nicole Seaman

    P1.T2.719. One- versus two-tailed hypothesis tests (Miller Ch.7)

    Thank you so very much @David Harper CFA FRM - Crystal clear now.
    Thank you so very much @David Harper CFA FRM - Crystal clear now.
    Thank you so very much @David Harper CFA FRM - Crystal clear now.
    Thank you so very much @David Harper CFA FRM - Crystal clear now.
    Replies:
    8
    Views:
    84
  3. Nicole Seaman

    P1.T2.718. Confidence in the mean and variance (Miller Ch.7)

    Hi @FlorenceCC By design, this question produced a critical value that just happens to be displayed on the lookup table. We can interpolate to approximate; for example, if the test statistic were 1.580, that's halfway between the displayed values at 0.10 and 0.05 (i.e., between 1.363 and 1.796) such that we could approximate the p-value as 7.50% (it won't be exactly correct as the underlying...
    Replies:
    2
    Views:
    38
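The interpolation described in this reply can be reproduced directly. The two table values (t = 1.363 at p = 0.10 and t = 1.796 at p = 0.05) come from the reply itself; they are consistent with the df = 11 row of a standard Student's t lookup table:

```python
# Linear interpolation of a one-tailed p-value between two t-table columns.
t_lo, p_lo = 1.363, 0.10   # table value at the 0.10 column
t_hi, p_hi = 1.796, 0.05   # table value at the 0.05 column
t_stat = 1.580             # test statistic roughly halfway between them

# Fraction of the distance from t_lo to t_hi, mapped onto the p-values
frac = (t_stat - t_lo) / (t_hi - t_lo)
p_approx = p_lo + frac * (p_hi - p_lo)   # roughly 7.5%
```

As the reply cautions, this is only approximate: the t distribution's tail is not linear between table columns, so the true p-value differs slightly.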
  4. David Harper CFA FRM

    P1.T2.717. Bayes' Theorem (Miller, Ch.6)

    Hi @FRM candidate Because we still don't know if the model is good or bad :cool:! Before the introduction of any evidence, the prior (unconditional) probabilities are: 80% probability the model is good and 20% that it is bad. Then we observe two exceptions in a row; we employ Bayes' theorem to revise the probabilities based on this evidence. The posterior probability that the model is bad thus increases...
    Replies:
    2
    Views:
    34
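The Bayesian revision described in this reply can be sketched numerically. The 80%/20% priors come from the reply; the per-day exception probabilities below are hypothetical stand-ins (the snippet does not show the likelihoods the question actually uses):

```python
# Bayesian update of P(model is bad) after two exceptions in a row.
p_good, p_bad = 0.80, 0.20           # priors from the thread
p_exc_good, p_exc_bad = 0.05, 0.10   # assumed P(exception | model), per day

# Likelihood of two independent exceptions under each hypothesis
like_good = p_exc_good ** 2
like_bad = p_exc_bad ** 2

# Bayes' theorem: posterior P(bad | two exceptions)
posterior_bad = (p_bad * like_bad) / (p_good * like_good + p_bad * like_bad)
```

With these assumed likelihoods the posterior P(bad) rises from 20% to 50%, illustrating the reply's point that the evidence shifts probability toward the "bad model" hypothesis without settling the question.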
  5. David Harper CFA FRM

    P1.T2.716. Central limit theorem and mixture distributions (Miller, Ch 4)

    Learning objectives: Describe the central limit theorem and the implications it has when combining independent and identically distributed (i.i.d.) random variables. Describe i.i.d. random variables and the implications of the i.i.d. assumption when combining random variables. Describe a mixture distribution and explain the creation and characteristics of mixture...
    Replies:
    0
    Views:
    19
  6. Nicole Seaman

    P1.T2.715. Chi-squared distribution, Student’s t, and F-distributions (Miller Ch.4)

    Learning Objectives: Distinguish the key properties among the following distributions: ... Chi-squared distribution, Student’s t, and F-distributions. Questions: For the following questions, please rely on the statistical lookup tables provided here . This document contains four lookup tables (note that each also contains an example): cumulative standard normal distribution, student's t...
    Replies:
    0
    Views:
    27
  7. Nicole Seaman

    P1.T2.714. Lognormal distribution (Miller Chapter 4)

    Could you please explain exercise no. 1 in more depth? Thank you.
    Replies:
    1
    Views:
    49
  8. Nicole Seaman

    P1.T2.713. Uniform, binomial, Poisson distributions (Miller Ch.4)

    Learning Objectives: Distinguish the key properties among the following distributions: uniform distribution, Binomial distribution, Poisson distribution. Questions: 713.1. Let A_uniform and B_uniform each represent independent random uniform variables, where A_uniform falls on the interval (0, 3) and B_uniform falls on the interval (4, 10). Which of the following is nearest to...
    Replies:
    0
    Views:
    76
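Question 713.1's setup can be checked against the closed-form uniform moments. The intervals and independence come from the question; the sum statistics at the end are an illustrative extension, not necessarily what the (truncated) question asks for:

```python
# Closed-form moments of a continuous uniform on (a, b):
# mean = (a + b)/2, variance = (b - a)^2 / 12
def uniform_mean(a, b):
    return (a + b) / 2

def uniform_var(a, b):
    return (b - a) ** 2 / 12

mean_a, var_a = uniform_mean(0, 3), uniform_var(0, 3)    # 1.5 and 0.75
mean_b, var_b = uniform_mean(4, 10), uniform_var(4, 10)  # 7.0 and 3.00

# Because A_uniform and B_uniform are independent, the mean and
# variance of their sum both simply add
mean_sum = mean_a + mean_b   # 8.5
var_sum = var_a + var_b      # 3.75
```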
  9. Nicole Seaman

    P1.T2.712. Skew, kurtosis, coskew and cokurtosis (Miller, Chapter 3)

    Learning objectives: Describe the four central moments of a statistical variable or distribution: mean, variance, skewness, and kurtosis. Interpret the skewness and kurtosis of a statistical distribution, and interpret the concepts of coskewness and cokurtosis. Describe and interpret the best linear unbiased estimator. Questions: 712.1. Consider the following discrete probability...
    Replies:
    0
    Views:
    38
  10. Nicole Seaman

    P1.T2.711. Covariance and correlation (Miller, Ch.3)

    Learning objectives: Calculate and interpret the covariance and correlation between two random variables. Calculate the mean and variance of sums of variables. Questions: 711.1. The following probability matrix displays joint probabilities for an inflation outcome, I = {2, 3, or 4}, and an unemployment outcome, U = {5, 7 or 9}. Also shown are the expected values and variances for each...
    Replies:
    0
    Views:
    55
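The mechanics of question 711.1 can be sketched end-to-end. The outcome sets match the question (I in {2, 3, 4}; U in {5, 7, 9}), but the joint probabilities below are hypothetical stand-ins, since the snippet does not show the actual matrix:

```python
# Covariance and correlation from a joint probability matrix.
i_vals = [2, 3, 4]             # inflation outcomes (rows)
u_vals = [5, 7, 9]             # unemployment outcomes (columns)
joint = [[0.10, 0.05, 0.05],   # hypothetical joint probabilities
         [0.05, 0.30, 0.05],   # (they sum to 1.0)
         [0.05, 0.05, 0.30]]

cells = [(r, c) for r in range(3) for c in range(3)]

# Expectations straight from the joint probabilities
e_i = sum(joint[r][c] * i_vals[r] for r, c in cells)
e_u = sum(joint[r][c] * u_vals[c] for r, c in cells)
e_iu = sum(joint[r][c] * i_vals[r] * u_vals[c] for r, c in cells)

# cov(I, U) = E[I*U] - E[I]*E[U], then correlation
cov_iu = e_iu - e_i * e_u
var_i = sum(joint[r][c] * i_vals[r] ** 2 for r, c in cells) - e_i ** 2
var_u = sum(joint[r][c] * u_vals[c] ** 2 for r, c in cells) - e_u ** 2
corr_iu = cov_iu / (var_i ** 0.5 * var_u ** 0.5)
```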
  11. Nicole Seaman

    P1.T2.710. Mean and standard deviation (Miller, Ch.3)

    Hi @drewmanfsu You really only need the power rule. For the mean, x*f(x) = x*3*x^2/64 = 3*x^3/64 = (3/64)*x^3. Applying the power rule, the integral of (3/64)*x^3 = (3/64)*x^4/4 = (3/256)*x^4. To evaluate this integral over {0,4} we have (3/256)*4^4 - (3/256)*0^4; i.e., we really only need (3/256)*4^4 = (3/256)*256 = 3.0. For the variance, we want the integral of (x - 3)^2*3*x^2/64 =...
    Replies:
    2
    Views:
    87
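The power-rule calculation in this reply can be verified in a few lines. The pdf f(x) = 3*x^2/64 on [0, 4] and the mean of 3.0 come from the reply; the variance value falls out of the same method:

```python
# Mean: integrand x*f(x) = (3/64)*x^3, antiderivative (3/256)*x^4,
# evaluated over [0, 4]
mean = (3 / 256) * 4 ** 4 - (3 / 256) * 0 ** 4   # = 3.0

# Variance: integrand (x - 3)^2 * f(x) = (3/64)*(x^4 - 6*x^3 + 9*x^2);
# power-rule antiderivative term by term:
def F(x):
    return (3 / 64) * (x ** 5 / 5 - 1.5 * x ** 4 + 3 * x ** 3)

variance = F(4) - F(0)   # = 0.6
```

Evaluating F at 4 gives (3/64)*(204.8 - 384 + 192) = (3/64)*12.8 = 0.6, so the standard deviation is sqrt(0.6).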
  12. Nicole Seaman

    P1.T2.709. Joint probability matrices (Miller Ch.2)

    Learning objectives: Distinguish between independent and mutually exclusive events. Define joint probability, describe a probability matrix, and calculate joint probabilities using probability matrices. Define and calculate a conditional probability, and distinguish between conditional and unconditional probabilities. Questions: 709.1. The following probability matrix gives the joint...
    Replies:
    0
    Views:
    51
  13. Nicole Seaman

    P1.T2.708. Probability function fundamentals (Miller Ch. 2)

    Hi @mansoor_memon We apply the binomial pmf, where Prob(X = k) = C(n,k)*p^k*(1-p)^(n-k); in this case, P(X = 1 | p = 0.050 and n = 30) = C(30, 1)*0.050^1*(1-0.050)^(30-1) = 30*0.050^1*0.950^29 = 0.338903311. Thanks,
    Replies:
    6
    Views:
    109
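The binomial calculation in this reply can be reproduced with the standard library; all numbers come from the reply itself:

```python
from math import comb

# Binomial pmf: P(X = k) = C(n,k) * p^k * (1-p)^(n-k)
def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# P(X = 1 | p = 0.05, n = 30) = 30 * 0.05 * 0.95^29
prob = binom_pmf(1, 30, 0.05)
```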
  14. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Thank you David.
    Replies:
    3
    Views:
    77
  15. Nicole Seaman

    P1.T2.706. Bivariate normal distribution (Hull)

    @David Harper CFA FRM, makes perfect sense now. thanks for taking the time again.
    Replies:
    8
    Views:
    119
  16. Nicole Seaman

    P1.T2.705. Correlation (Hull)

    Thank you emilioalzamora and David for such a detailed explanation.
    Replies:
    13
    Views:
    201
  17. Nicole Seaman

    P1.T2.704. Forecasting volatility with GARCH (Hull)

    Thanks - much appreciate the extra detail, David. I knew that omega = weight * long-run variance. What was confusing me was knowing from the question (whether a variance or a covariance/correlation omega) when to do the omega -> variance conversion; i.e., when to leave omega as is and when to split it up.
    Replies:
    6
    Views:
    114
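The "leave omega as is versus split it up" distinction in this thread can be shown concretely. All parameter values below are hypothetical illustrations, not taken from the question set:

```python
# GARCH(1,1): the intercept omega equals gamma * V_L, i.e. the weight on
# the long-run variance times the long-run variance itself.
omega, alpha, beta = 0.000002, 0.13, 0.86

# "Split it up": recover the long-run variance from omega
gamma = 1 - alpha - beta             # weight on the long-run variance (0.01)
long_run_var = omega / gamma         # V_L = omega / (1 - alpha - beta)
long_run_vol = long_run_var ** 0.5   # roughly 1.41% per day here

# "Leave it as is": the one-step variance update uses omega directly
var_prev, ret_prev = 0.00016, 0.010
var_next = omega + alpha * ret_prev ** 2 + beta * var_prev
```

The split is only needed when a question asks about the long-run (unconditional) variance; the day-to-day update keeps omega intact.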
  18. Nicole Seaman

    P1.T2.703. EWMA versus GARCH volatility (Hull)

    Hi [USER=48426]@ That's really just an elaboration on Hull's End of Chapter Question 11.6, and I can't see any problem with it, although it's highly tedious! :eek: I entered it into the more generic version (in our learning XLS); see here (and below). It's true that there are two omega (ω) assumptions given, but that looks okay (consistent) with Hull. Please note that GARCH is used twice...
    Replies:
    6
    Views:
    107
  19. Nicole Seaman

    P1.T2.699. Linear and nonlinear trends (Diebold)

    Thank you - That's really helpful!
    Replies:
    17
    Views:
    214
  20. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3) along with daily log returns, squared...
    Replies:
    0
    Views:
    42
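The simple (equally weighted) estimator named in this thread's title can be sketched directly. The closing prices below are hypothetical, standing in for the ten-day series referenced in question 702.1:

```python
from math import log, sqrt

# Simple historical volatility: the standard deviation of daily log returns,
# with every observation weighted equally.
prices = [50.00, 50.50, 49.75, 50.25, 51.00, 50.60]   # hypothetical closes
returns = [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

n = len(returns)
mean_ret = sum(returns) / n
# Unbiased (n - 1) sample variance of the log returns
sample_var = sum((r - mean_ret) ** 2 for r in returns) / (n - 1)
daily_vol = sqrt(sample_var)
```

Hull also discusses simplifying variants (assuming a zero mean return and dividing by n instead of n - 1), which change the estimate only slightly for typical daily data.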
  21. Nicole Seaman

    P1.T2.701. Regression analysis to model seasonality (Diebold)

    Many thanks for your lovely comment, Brian. It is nothing special I guess; I have just read a few textbooks, that's it. The modelling (implementation work) is another kettle of fish. These "goodness of fit" tests are technically quite complex. Again, thanks for your like!
    Replies:
    11
    Views:
    116
  22. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    @David Harper CFA FRM got it!! It’s a really important question to understand the difference between s=n dummy variables or s=n-1 plus an intercept, thank you.
    Replies:
    5
    Views:
    87
  23. Nicole Seaman

    P1.T2.602. Bootstrapping (Brooks)

    a GARCH process is covered in the readings.... Simulations are used to produce samples from distributions that are not parametric or not in "closed form" or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes or flood levels or survival in space.....
    Replies:
    4
    Views:
    136
  24. Nicole Seaman

    P1.T2.601. Variance reduction techniques (Brooks)

    Learning objectives: Explain how to use antithetic variate technique to reduce Monte Carlo sampling error. Explain how to use control variates to reduce Monte Carlo sampling error and when it is effective. Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them. Questions: 601.1. Betty is an analyst using Monte Carlo simulation to...
    Replies:
    0
    Views:
    93
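The antithetic variate technique named in the learning objectives can be sketched on a toy problem. The target E[exp(Z)] below is my hypothetical choice, not question 601.1's setup; its true value is exp(0.5) for Z ~ N(0,1):

```python
import math
import random

# Antithetic variates: pair every standard normal draw z with its mirror -z,
# so the sampling errors in the two halves partially cancel.
random.seed(42)
n = 50_000
draws = [random.gauss(0.0, 1.0) for _ in range(n)]

# Each underlying draw contributes the average of the payoff at +z and -z
estimate = sum((math.exp(z) + math.exp(-z)) / 2.0 for z in draws) / n
```

Note the n antithetic pairs cost 2n function evaluations but use only n random draws; the variance reduction comes from the negative correlation between the +z and -z payoffs.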
  25. Nicole Seaman

    P1.T2.600. Monte Carlo simulation, sampling error (Brooks)

    Thank you @QuantMan2318 , nice reasoning! (cc @Nicole Manley ) The answer is given correctly as (C), which is false. But there was a typo; consistent with the text given, it should read "In regard to true (A), (B), and (D), ..." You might notice that the explanation itemizes each of the TRUE (A), (B), and (D), specifically:
    Replies:
    4
    Views:
    146
  26. Nicole Seaman

    P1.T2.512. Autoregressive moving average (ARMA) processes (Diebold)

    Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
    Replies:
    0
    Views:
    85
  27. David Harper CFA FRM

    P1.T2.511. First-order autoregressive, AR(1), process (Diebold)

    [USER=38486]@ Yes, if you look at the GARP curriculum for this year, you will see that these learning objectives are still under Topic 2, Reading 16, Diebold, Chapter 8. Thank you, Nicole
    Replies:
    8
    Views:
    173
  28. Nicole Seaman

    P1.T2.510. First-order and general finite-order moving average process, MA(1) and MA(q) (Diebold)

    In the study notes it is explained that "If the inverses of all roots of Θ(L) are inside the unit circle, then the process is invertible". Pg 42 of quantitative analysis - Diebold, Elements of Forecasting, Chapters 5, 6 7 & 8
    Replies:
    3
    Views:
    242
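The invertibility condition quoted in this thread (inverses of all roots of Θ(L) inside the unit circle) can be checked explicitly. The sketch below handles an MA(2) via the quadratic formula; the coefficients in the final lines are hypothetical, chosen only to exercise the check:

```python
import cmath

# Invertibility of an MA(2): Theta(L) = 1 + t1*L + t2*L^2. The process is
# invertible iff the inverse of each root lies inside the unit circle
# (equivalently, both roots lie outside it). Requires t2 != 0.
def ma2_invertible(t1, t2):
    disc = cmath.sqrt(t1 * t1 - 4 * t2)   # complex sqrt handles complex roots
    roots = [(-t1 + disc) / (2 * t2), (-t1 - disc) / (2 * t2)]
    return all(abs(1 / r) < 1 for r in roots)

print(ma2_invertible(0.5, 0.06))   # inverse roots -0.3 and -0.2: invertible
print(ma2_invertible(2.5, 1.0))    # root -0.5 has inverse -2: not invertible
```

For an MA(1), Theta(L) = 1 + theta*L, the single root is -1/theta, so the same condition collapses to |theta| < 1.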
  29. Nicole Seaman

    P1.T2.509. Box-Pierce and Ljung-Box Q-statistics (Diebold)

    Hi Joyce, Wonder how I made a mistake - yes, you are right, I was looking at Chi-square 95%, 24 instead of Chi-square 5%, 24 = 36.415! Thanks a tonne:) Jayanthi
    Replies:
    3
    Views:
    184
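The table value cited in this thread, chi-square(5%, df = 24) = 36.415, can be sanity-checked without a stats library using the Wilson-Hilferty approximation (an approximation, so it lands near but not exactly on the table value):

```python
# Wilson-Hilferty approximation to an upper-tail chi-square critical value:
# chi2 ~ df * (1 - 2/(9*df) + z * sqrt(2/(9*df)))^3
def chi2_upper_critical(z, df):
    # z = standard normal quantile for the desired upper-tail area
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * c ** 0.5) ** 3

crit = chi2_upper_critical(1.6449, 24)   # z = 1.6449 for a 5% upper tail
```

This comes out near 36.41, confirming that a Ljung-Box Q-statistic is compared against the 5% upper-tail column, not the 95% column.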
  30. Nicole Seaman

    P1.T2.508. Wold's theorem (Diebold)

    Hi David I am slightly confused by question 508.2 - if Wold's theorem is based on an infinite series of white noise terms, which themselves have both constant/finite conditional and unconditional means and variances, how could our conditional mean vary over time with the information set? :confused: Thank you!
    Replies:
    5
    Views:
    302
