P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, Monte Carlo Simulation (MCS), Volatility, Probability Distributions and VaR (Intro)

  1. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Learning objectives: Define copula and describe the key properties of copulas and copula correlation. Explain tail dependence. Describe the Gaussian copula, Student’s t-copula, multivariate copula, and one-factor copula. Questions: 707.1. Below are the joint probabilities for a cumulative bivariate normal distribution with a correlation parameter, ρ, of 0.30. If V(1) and V(2) are each...
    Replies:
    0
    Views:
    38
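
    A minimal sketch (not from question 707.1 itself) of how a cumulative bivariate normal probability with correlation ρ = 0.30 can be computed; the percentile inputs are hypothetical, and it assumes a SciPy version in which multivariate_normal exposes a cdf method.

        # Hedged sketch: joint probability under a cumulative bivariate standard normal
        # with correlation rho = 0.30. The percentile inputs are illustrative only.
        from scipy.stats import norm, multivariate_normal

        rho = 0.30
        u1, u2 = 0.25, 0.25                      # hypothetical copula percentiles for V(1), V(2)
        z1, z2 = norm.ppf(u1), norm.ppf(u2)      # map percentiles to standard normal quantiles

        biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
        joint_prob = biv.cdf([z1, z2])           # P[Z1 <= z1, Z2 <= z2] given correlation 0.30
        print(round(joint_prob, 4))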
  2. Nicole Seaman

    P1.T2.706. Bivariate normal distribution (Hull)

    @David Harper CFA FRM, makes perfect sense now. thanks for taking the time again.
    Replies:
    8
    Views:
    83
  3. Nicole Seaman

    P1.T2.705. Correlation (Hull)

    Thank you emilioalzamora and David for such a detailed explanation.
    Replies:
    13
    Views:
    128
  4. Nicole Seaman

    P1.T2.704. Forecasting volatility with GARCH (Hull)

    Hi @jjman2000 Not from volatility. EWMA is easier to understand first, I think. As Hull shows, the EWMA formula for the estimate of current variance, σ^2(n) = λ*σ^2(n-1) + (1-λ)*u^2(n-1), is a recursive solution to the (infinite) series σ^2(n) = (1-λ)*u^2(n-1) + (1-λ)*λ*u^2(n-2) + (1-λ)*λ^2*u^2(n-3) + .... so keeping in mind that simple historical variance is just an average of squared returns...
    Replies:
    2
    Views:
    51
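
    To make the EWMA relationship quoted above concrete, here is a minimal Python sketch with a made-up return series; it checks that the recursion σ^2(n) = λ*σ^2(n-1) + (1-λ)*u^2(n-1) reproduces the exponentially declining weights on past squared returns.

        # Illustrative sketch of the EWMA variance estimator discussed above.
        # The returns are hypothetical; lambda = 0.94 is the classic RiskMetrics choice.
        lam = 0.94
        returns = [0.010, -0.012, 0.008, 0.015, -0.007]   # hypothetical daily returns

        # Recursive form: sigma^2(n) = lam*sigma^2(n-1) + (1-lam)*u^2(n-1)
        var = returns[0] ** 2                   # seed the recursion with the first squared return
        for u in returns[1:]:
            var = lam * var + (1 - lam) * u ** 2

        # Equivalent expanded series: sum of (1-lam)*lam^k * u^2 over past squared returns,
        # plus the residual weight left on the seed term.
        expanded = sum((1 - lam) * lam ** k * u ** 2
                       for k, u in enumerate(reversed(returns[1:])))
        expanded += lam ** len(returns[1:]) * returns[0] ** 2

        print(var, expanded)   # the two agree: the recursion equals the weighted series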
  5. Nicole Seaman

    P1.T2.703. EWMA versus GARCH volatility (Hull)

    Learning objectives: Apply the exponentially weighted moving average (EWMA) model to estimate volatility. Describe the generalized autoregressive conditional heteroskedasticity (GARCH(p,q)) model for estimating volatility and its properties. Calculate volatility using the GARCH(1,1) model. Questions: 703.1. The most recent estimate of the daily volatility of an asset is 4.0% and the price...
    Replies:
    0
    Views:
    32
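
    As a companion to question 703.1 above, a minimal sketch of a one-step GARCH(1,1) variance update; the parameter values and the latest return are hypothetical placeholders, not the inputs used in the actual question.

        import math

        # Hypothetical GARCH(1,1) parameters and inputs, for illustration only.
        omega, alpha, beta = 0.000002, 0.06, 0.92
        sigma_today = 0.040        # most recent daily volatility estimate (4.0%)
        u_today = 0.030            # hypothetical most recent daily return (3.0%)

        # GARCH(1,1): sigma^2(n) = omega + alpha*u^2(n-1) + beta*sigma^2(n-1)
        var_next = omega + alpha * u_today ** 2 + beta * sigma_today ** 2
        print("next-day volatility estimate:", math.sqrt(var_next))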
  6. Nicole Seaman

    P1.T2.699. Linear and nonlinear trends (Diebold)

    Hi [USER=46018]@ I showed all four graphics in the answer specifically to illustrate the various dynamics. I think there are several approaches. One shortcut is to observe that at time = 60, the trend is near the y = 0 axis. (actually it's y = 10, but that's close enough to identify the correct trend!). Then, if we try X = 60 in each of the four trend models, we will pretty quickly see that...
    Replies:
    2
    Views:
    60
  7. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3) along with daily log returns, squared...
    Replies:
    0
    Views:
    23
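
    Following question 702.1 above, a minimal sketch of simple (equally weighted) historical volatility from closing prices; the price series is hypothetical, not Hull's Table 10.3, and it uses Hull's simplification of a zero mean return with division by m.

        import math

        # Hypothetical closing prices over recent trading days (not Hull's actual table).
        prices = [20.00, 20.10, 19.90, 20.05, 20.20, 20.15, 20.30, 20.25, 20.40, 20.50]

        # Daily log returns u(i) = ln(S(i)/S(i-1))
        log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

        # Simple estimator: assume a zero mean return and divide by m, so the variance
        # is just the average squared return.
        m = len(log_returns)
        variance = sum(u ** 2 for u in log_returns) / m
        print("daily volatility estimate:", math.sqrt(variance))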
  8. Nicole Seaman

    P1.T2.701. Regression analysis to model seasonality (Diebold)

    Many thanks for your lovely comment, Brian. It is nothing special, I guess; I have just read a few textbooks, that's it. The modelling (implementation work) is another kettle of fish. These "goodness of fit" tests are technically quite complex. Again, thanks for your like!
    Replies:
    11
    Views:
    87
  9. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    Learning objective: Describe the sources of seasonality and how to deal with it in time series analysis. Questions 700.1. Which of the following time series is MOST LIKELY to contain a seasonal pattern? a. Price of solar panels b. Employment participation rate c. Climate data recorded from a weather station once per year d. Return on average assets (ROA) for the large commercial bank...
    Replies:
    0
    Views:
    41
  10. Nicole Seaman

    P1.T2.602. Bootstrapping (Brooks)

    a GARCH process is covered in the readings.... Simulations are used to produce samples from distributions that are not parametric or not in "closed form" or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes or flood levels or survival in space.....
    Replies:
    4
    Views:
    113
  11. Nicole Seaman

    P1.T2.601. Variance reduction techniques (Brooks)

    Learning objectives: Explain how to use antithetic variate technique to reduce Monte Carlo sampling error. Explain how to use control variates to reduce Monte Carlo sampling error and when it is effective. Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them. Questions: 601.1. Betty is an analyst using Monte Carlo simulation to...
    Replies:
    0
    Views:
    80
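
    To illustrate the antithetic variate technique named in the learning objectives above, a minimal self-contained sketch with a hypothetical payoff function: each standard normal draw z is paired with its mirror image -z, and the variance of the pair averages is compared with the single-draw variance.

        import math
        import random

        random.seed(42)

        def payoff(z):
            # Hypothetical monotone payoff, used purely for illustration.
            return math.exp(0.1 * z)

        def mean(xs):
            return sum(xs) / len(xs)

        def sample_var(xs):
            m = mean(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        n_pairs = 10_000
        single_draws, pair_averages = [], []
        for _ in range(n_pairs):
            z = random.gauss(0.0, 1.0)
            single_draws.append(payoff(z))
            # Antithetic variate: pair each draw z with -z and average the two payoffs.
            pair_averages.append(0.5 * (payoff(z) + payoff(-z)))

        # When the variance of the pair averages is below half the single-draw variance,
        # antithetics reduce sampling error per payoff evaluation (here the payoff is
        # monotone, so payoff(z) and payoff(-z) are negatively correlated).
        print(mean(single_draws), sample_var(single_draws))
        print(mean(pair_averages), sample_var(pair_averages))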
  12. Nicole Seaman

    P1.T2.600. Monte Carlo simulation, sampling error (Brooks)

    Thank you @QuantMan2318 , nice reasoning! (cc @Nicole Manley) The answer is given correctly as (C), which is false. But there was a typo: consistent with the text given, it should read "In regard to true (A), (B), and (D), ..." You might notice that the explanation itemizes each of the TRUE (A), (B), and (D), specifically:
    Replies:
    4
    Views:
    124
  13. Nicole Seaman

    P1.T2.512. Autoregressive moving average (ARMA) processes (Diebold)

    Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
    Replies:
    0
    Views:
    79
  14. David Harper CFA FRM

    P1.T2.511. First-order autoregressive, AR(1), process (Diebold)

    [USER=38486]@ Yes, if you look at the GARP curriculum for this year, you will see that these learning objectives are still under Topic 2, Reading 16, Diebold, Chapter 8. Thank you, Nicole
    Replies:
    8
    Views:
    156
  15. Nicole Seaman

    P1.T2.510. First-order and general finite-order moving average process, MA(1) and MA(q) (Diebold)

    If the roots are real and not complex, I believe.
    Replies:
    2
    Views:
    205
  16. Nicole Seaman

    P1.T2.509. Box-Pierce and Ljung-Box Q-statistics (Diebold)

    Hi Joyce, Wonder how I made a mistake - yes, you are right, I was looking at Chi-square 95%, 24 instead of Chi-square 5%, 24 = 36.415! Thanks a tonne:) Jayanthi
    Replies:
    3
    Views:
    151
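
    The chi-square critical value quoted in the thread above (5% significance, 24 degrees of freedom) can be checked in one line, assuming SciPy is available:

        from scipy.stats import chi2

        # 5% significance (95th percentile) with 24 degrees of freedom, as used for the
        # Box-Pierce / Ljung-Box Q-statistic discussed above.
        critical_value = chi2.ppf(0.95, df=24)
        print(round(critical_value, 3))   # approximately 36.415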
  17. Nicole Seaman

    P1.T2.508. Wold's theorem (Diebold)

    [USER=42750]@ Okay great. No worries, honestly I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...
    Replies:
    4
    Views:
    216
  18. Nicole Seaman

    P1.T2.507. White noise (Diebold)

    Learning outcomes: Define white noise, describe independent white noise and normal (Gaussian) white noise. Explain the characteristics of the dynamic structure of white noise. Explain how a lag operator works. Questions: 507.1. In regard to white noise, each of the following statements is true EXCEPT which is false? a. If a process is zero-mean white noise, then it must be Gaussian white...
    Replies:
    0
    Views:
    127
  19. Nicole Seaman

    P1.T2.506. Covariance stationary time series (Diebold)

    Hi [USER=46018]@ See below I copied Diebold's explanation for partial autocorrelation (which is excellent, in my opinion). If you keep in mind the close relationship between beta and correlation, then you can view this as analogous to the difference between (in a regression) a univariate slope coefficient and a partial multivariate slope coefficient. We can extract correlation by multiplying...
    Replies:
    6
    Views:
    175
  20. Nicole Seaman

    P1.T2.505. Model selection criteria (Diebold)

    Hi @DTu Yes, but depending on the author, (k) is sometimes defined as the number of independent variables or the number of parameters. For example, y = b + m1*x1 + m2*x2 + m3*x3 + e is a regression model with three independent variables (x1, x2, x3), four total variables (including the dependent variable), and four parameters (intercept b plus slopes m1, m2, m3). The degrees of freedom, df = n-4 because four...
    Replies:
    2
    Views:
    255
  21. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies:
    25
    Views:
    862
  22. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved: e1 = z1 = -0.88; e2 = ρ*z1 + z2*sqrt(1-ρ^2); e2 = [0.70*(-0.88)] + [0.63*sqrt(1-(0.7)^2)]; e2 = -0.16609; U = Mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36; V = Mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346 Thanks, Rajiv
    Replies:
    20
    Views:
    829
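
    The one-factor-model arithmetic in the thread above can be reproduced directly; this sketch simply re-runs the quoted numbers (ρ = 0.70, draws z1 = -0.88 and z2 = 0.63, U with mean 5 and SD 3, V with mean 10 and SD 6).

        import math

        # Inputs taken from the worked computation quoted above.
        rho = 0.70
        z1, z2 = -0.88, 0.63           # independent standard normal draws
        mu_u, sd_u = 5.0, 3.0
        mu_v, sd_v = 10.0, 6.0

        e1 = z1
        e2 = rho * z1 + z2 * math.sqrt(1 - rho ** 2)   # correlated standard normal

        U = mu_u + sd_u * e1           # 5 + 3*(-0.88)  = 2.36
        V = mu_v + sd_v * e2           # 10 + 6*(-0.16609) ≈ 9.0035
        print(e2, U, V)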
  23. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models (Hull)

    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded the it here for you also @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A are meant to be available to all subscribers). In almost...
    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded the it here for you also @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A are meant to be available to all subscribers). In almost...
    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded the it here for you also @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription...
    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded the it here for you also @emilioalzamora1 Thanks for your...
    Replies:
    21
    Views:
    559
  24. Nicole Seaman

    P1.T2.501. More Bayes Theorem (Miller)

    Hi @jacobweiss2305 Yes, agreed, nice attention to detail. What Miller calls the probability matrix (your lower "Unconditional") is, the way I look at it, essentially a matrix of joint probabilities; e.g., joint Prob(R ∩ U) = 3.0%. BTW, I tend to write this with a comma because the intersection symbol isn't always handy: joint Prob(R, U) = 3.0%. The joint probabilities are inside the square...
    Replies:
    11
    Views:
    340
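
    To make the joint-probability point above concrete, a tiny sketch using the one figure quoted in the thread, joint Prob(R ∩ U) = 3.0%; the unconditional Prob(U) is a hypothetical placeholder added only to show the conditional-probability step.

        # Joint probability from the thread above; the marginal is hypothetical, for illustration.
        p_r_and_u = 0.03      # joint Prob(R and U) = 3.0%, as quoted above
        p_u = 0.10            # hypothetical unconditional Prob(U)

        # Conditional probability (the building block of Bayes theorem):
        # P(R | U) = P(R and U) / P(U)
        p_r_given_u = p_r_and_u / p_u
        print(p_r_given_u)    # 0.30 under these illustrative numbers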
  25. Nicole Seaman

    P1.T2.500. Bayes theorem (Miller)

    Testing Amazon link
    Replies:
    25
    Views:
    381
  26. Nicole Seaman

    Quiz-T2 P1.T2.409 Volatility, GARCH(1,1) and EWMA

    Per @Robert Paterson 's correction, the first bullet under 409.2.A was corrected to read: In regard to (a), this is FALSE: because the weights sum to one (i.e., alpha + beta + gamma = 1.0) and omega = long-run variance*gamma, the long-run volatility = SQRT[omega/gamma] = sqrt[omega/(1 - alpha - beta)] = sqrt[0.0000960/(1 - 0.060 - 0.880)] = sqrt[0.0000960/0.060] = 4.0% (+1 star for @Robert...
    Replies:
    2
    Views:
    154
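
    The corrected bullet above can be verified in a few lines using the quoted inputs (omega = 0.0000960, alpha = 0.060, beta = 0.880):

        import math

        # Parameters quoted in the corrected bullet above.
        omega, alpha, beta = 0.0000960, 0.060, 0.880

        gamma = 1 - alpha - beta                 # weight on the long-run variance
        long_run_variance = omega / gamma        # V_L = omega / (1 - alpha - beta)
        print(math.sqrt(long_run_variance))      # 0.04, i.e. long-run volatility of 4.0%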
  27. Nicole Seaman

    Quiz-T2 P1.T2.408. Multivariate linear regression

    In case of heteroskedasticity there will always be a downward bias on the standard error, making the t-statistics (and obviously F-statistics) higher. So, in that way, B is correct too. I am not saying D is incorrect. D is obviously correct, as in case of multicollinearity there will be large standard errors on the coefficients (independent variables), rendering low t-statistics for them,...
    Replies:
    7
    Views:
    196
  28. Nicole Seaman

    Quiz-T2 P1.T2.407. Univariate linear regression

    Hello @uness_o7 Thank you for pointing this out. I will get this fixed as soon as possible. Nicole
    Replies:
    12
    Views:
    219
  29. Nicole Seaman

    Quiz-T2 P1.T2.406. Distributions II

    Hi @fjc120 The F-distribution is on page 60 of P1.T2. Miller (see below). It's also in the Miller reading, although personally I do not find Miller's explanation awesomely helpful. See also ; i.e., we just take the ratio of the two sample variances, and this F-ratio (aka, variance ratio) is used to test the null hypothesis that the (underlying population) variances are equal. If the population...
    Replies:
    21
    Views:
    286
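
    Following the F-ratio discussion above, a minimal sketch with hypothetical sample variances: it takes the ratio of the two sample variances and compares it with an F critical value, assuming SciPy is available.

        from scipy.stats import f

        # Hypothetical sample variances and sample sizes, purely for illustration.
        s2_a, n_a = 0.0025, 30
        s2_b, n_b = 0.0016, 25

        # F-ratio (larger variance in the numerator) to test H0: equal population variances.
        f_ratio = s2_a / s2_b
        critical = f.ppf(0.95, dfn=n_a - 1, dfd=n_b - 1)   # 5% one-tailed critical value
        print(f_ratio, critical, f_ratio > critical)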
  30. Nicole Seaman

    Quiz-T2 P1.T2.405. Distributions I

    Hi @uness_o7 There are two issues, I think. First, if we were conducting a test of the sample mean (e.g., what is the probability of obtaining a sample mean profit of $25 million next week), then we need the standard error. If we know the population variance (which is not given) we can assume Z = (mean X - µ)/SQRT[σ(p)^2/n]. But realistically (as is also the case in this question) we don't...
    Replies:
    16
    Views:
    430
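
    To illustrate the standard-error point in the thread above, a minimal sketch of a test statistic for a sample mean; all inputs are hypothetical, and because the population variance is unknown it uses the sample standard deviation and a t-statistic.

        import math

        # Hypothetical inputs, purely for illustration (not the question's figures).
        sample_mean = 25.0      # e.g., mean weekly profit in $ millions
        mu_0 = 20.0             # hypothesized population mean
        sample_std = 12.0       # sample standard deviation (population variance unknown)
        n = 36                  # sample size

        standard_error = sample_std / math.sqrt(n)
        t_stat = (sample_mean - mu_0) / standard_error   # t = (x-bar - mu)/[s/sqrt(n)]
        print(standard_error, t_stat)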
