P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: Econometrics, MCS, Volatility, Probability Distributions and VaR (Intro)

  1. Nicole Seaman

    P1.T2.707. Gaussian Copula (Hull)

    Learning objectives: Define copula and describe the key properties of copulas and copula correlation. Explain tail dependence. Describe the Gaussian copula, Student’s t-copula, multivariate copula, and one-factor copula. Questions: 707.1. Below are the joint probabilities for a cumulative bivariate normal distribution with a correlation parameter, ρ, of 0.30. If V(1) and V(2) are each...
    Replies: 0 · Views: 33
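
The joint probability referenced in 707.1 can be checked numerically. Below is a minimal sketch, assuming a recent scipy where multivariate_normal.cdf is available; the thresholds a and b are hypothetical placeholders because the question's actual inputs are truncated above.

```python
# Hedged sketch: joint probability P(V1 <= a, V2 <= b) under a cumulative
# bivariate standard normal with correlation rho = 0.30 (as in question 707.1).
# The thresholds a and b are hypothetical, not the question's actual inputs.
from scipy.stats import multivariate_normal, norm

rho = 0.30
a, b = -0.5, 0.0  # hypothetical thresholds

biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
joint = biv.cdf([a, b])            # P(V1 <= a, V2 <= b)
indep = norm.cdf(a) * norm.cdf(b)  # what the joint probability would be if rho = 0

print(f"joint P(V1<={a}, V2<={b}) with rho=0.30: {joint:.4f}")
print(f"product of marginals (rho = 0):          {indep:.4f}")
```
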
  2. Nicole Seaman

    P1.T2.706. Bivariate normal distribution (Hull)

    @David Harper CFA FRM, makes perfect sense now. thanks for taking the time again.
    Replies: 8 · Views: 72
  3. Nicole Seaman

    P1.T2.705. Correlation (Hull)

    Thank you emilioalzamora and David for such a detailed explanation.
    Replies: 13 · Views: 120
  4. Nicole Seaman

    P1.T2.704. Forecasting volatility with GARCH (Hull)

    Hi @jjman2000 Not from volatility. EWMA is easier to understand first, I think. As Hull shows, the EWMA formula for the estimate of current variance, σ^2(n) = λ*σ^2(n-1) + (1-λ)*u^2(n-1), is a recursive solution to the (infinite) series σ^2(n) = (1-λ)*u^2(n-1) + (1-λ)*λ*u^2(n-2) + (1-λ)*λ^2*u^2(n-3) + ...; so keeping in mind that simple historical variance is just an average of squared returns...
    Replies: 2 · Views: 45
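
To see the equivalence described above between the recursive EWMA update and the expanded weighted sum, here is a minimal sketch; the returns are simulated placeholders, not Hull's data.

```python
# Hedged sketch of the EWMA recursion quoted above:
# sigma^2(n) = lam*sigma^2(n-1) + (1-lam)*u^2(n-1) reproduces the weighted sum
# (1-lam)*[u^2(n-1) + lam*u^2(n-2) + lam^2*u^2(n-3) + ...].
import numpy as np

rng = np.random.default_rng(0)
lam = 0.94
u = rng.normal(0.0, 0.01, size=500)   # hypothetical daily returns

# Recursive form, seeded with the first squared return
var_rec = u[0] ** 2
for ret in u[1:]:
    var_rec = lam * var_rec + (1 - lam) * ret ** 2

# Expanded weighted-sum form: weights decay geometrically into the past
weights = (1 - lam) * lam ** np.arange(len(u) - 1)          # (1-lam)*lam^k, k = 0, 1, 2, ...
var_sum = np.dot(weights, u[1:][::-1] ** 2) + lam ** (len(u) - 1) * u[0] ** 2

print(var_rec, var_sum)   # the two forms agree
```
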
  5. Nicole Seaman

    P1.T2.703. EWMA versus GARCH volatility (Hull)

    Learning objectives: Apply the exponentially weighted moving average (EWMA) model to estimate volatility. Describe the generalized autoregressive conditional heteroskedasticity (GARCH(p,q)) model for estimating volatility and its properties. Calculate volatility using the GARCH(1,1) model. Questions: 703.1. The most recent estimate of the daily volatility of an asset is 4.0% and the price...
    Replies: 0 · Views: 28
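
As a companion to these learning objectives, here is a minimal sketch contrasting the EWMA and GARCH(1,1) one-step variance updates. The 4.0% prior volatility is taken from the question stub; the return and the GARCH parameters (omega, alpha, beta) are hypothetical assumptions, not the question's inputs.

```python
# Hedged sketch: one-step EWMA versus GARCH(1,1) variance updates.
import math

sigma_prev = 0.04      # most recent daily volatility estimate (4.0%, from the stub)
u_prev = 0.03          # hypothetical most recent daily return

# EWMA: sigma^2(n) = lam*sigma^2(n-1) + (1 - lam)*u^2(n-1)
lam = 0.94
var_ewma = lam * sigma_prev**2 + (1 - lam) * u_prev**2

# GARCH(1,1): sigma^2(n) = omega + alpha*u^2(n-1) + beta*sigma^2(n-1)
omega, alpha, beta = 0.000002, 0.06, 0.92    # hypothetical parameters, alpha + beta < 1
var_garch = omega + alpha * u_prev**2 + beta * sigma_prev**2
long_run_var = omega / (1 - alpha - beta)    # GARCH mean-reverts to this variance level

print(f"EWMA updated vol:   {math.sqrt(var_ewma):.4%}")
print(f"GARCH updated vol:  {math.sqrt(var_garch):.4%}")
print(f"GARCH long-run vol: {math.sqrt(long_run_var):.4%}")
```
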
  6. Nicole Seaman

    P1.T2.699. Linear and nonlinear trends (Diebold)

    Hi [USER=46018]@ I showed all four graphics in the answer specifically to illustrate the various dynamics. I think there are several approaches. One shortcut is to observe that at time = 60, the trend is near the y = 0 axis. (actually it's y = 10, but that's close enough to identify the correct trend!). Then, if we try X = 60 in each of the four trend models, we will pretty quickly see that...
    Replies: 2 · Views: 54
  7. Nicole Seaman

    P1.T2.702. Simple (equally weighted) historical volatility (Hull)

    Learning objectives: Define and distinguish between volatility, variance rate, and implied volatility. Describe the power law. Explain how various weighting schemes can be used in estimating volatility. Questions 702.1. Consider the following series of closing stock prices over the ten most recent trading days (this is similar to Hull's Table 10.3) along with daily log returns, squared...
    Replies: 0 · Views: 20
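
In the spirit of question 702.1, here is a minimal sketch of simple (equally weighted) historical volatility: compute daily log returns from closing prices and average the squared returns. The prices are hypothetical, not the values from Hull's Table 10.3.

```python
# Hedged sketch: equally weighted historical volatility from closing prices.
import numpy as np

prices = np.array([20.00, 20.10, 19.90, 20.05, 20.30, 20.25, 20.40, 20.35, 20.55, 20.50])
log_returns = np.diff(np.log(prices))        # u_i = ln(S_i / S_{i-1})

# Simplified estimator: average of squared returns (zero-mean assumption)
var_simple = np.mean(log_returns**2)
# Unbiased sample variance (subtracts the sample mean, divides by m-1)
var_sample = np.var(log_returns, ddof=1)

print(f"daily vol (zero-mean, /m):  {np.sqrt(var_simple):.4%}")
print(f"daily vol (sample, /(m-1)): {np.sqrt(var_sample):.4%}")
```
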
  8. Nicole Seaman

    P1.T2.701. Regression analysis to model seasonality (Diebold)

    Many thanks for your lovely comment, Brian. It is nothing special I guess, I have just read a few textbooks, that's it. The modelling (implementation work) is another kettle of fish. These "goodness of fit" tests are technically quite complex. Again, thanks for your like!
    Replies: 11 · Views: 85
  9. Nicole Seaman

    P1.T2.700. Seasonality in time series analysis (Diebold)

    Learning objective: Describe the sources of seasonality and how to deal with it in time series analysis. Questions 700.1. Which of the following time series is MOST LIKELY to contain a seasonal pattern? a. Price of solar panels b. Employment participation rate c. Climate data recorded from a weather station once per year d. Return on average assets (ROA) for the large commercial bank...
    Replies: 0 · Views: 38
  10. uness_o7

    R16.P1.T2. Hull - expected value of u(n+t-1)^2

    You need the assumption that the drift term of u can be neglected. If u(t) is a random variable, then its variance is defined as σ(t)^2 = E( u(t)^2 ) - E( u(t) )^2. If you now assume that the E( u(t) )^2 term can be neglected relative to the E( u(t)^2 ) term, then you get your result. u(t) models the return over a time period dt: u(t) = ( s(t) - s(t - dt) ) / s(t - dt), which means that E( u(t)...
    Replies: 1 · Views: 17
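
The approximation discussed above is easy to check numerically: for daily returns, E[u]^2 is tiny relative to E[u^2], so sigma^2 = E[u^2] - E[u]^2 ≈ E[u^2]. A minimal sketch, with simulated returns (mean 0.05% per day, vol 1% per day) as illustrative assumptions:

```python
# Hedged sketch: the drift term E[u]^2 is negligible next to E[u^2] for daily returns.
import numpy as np

rng = np.random.default_rng(42)
u = rng.normal(0.0005, 0.01, size=250_000)   # hypothetical daily returns

mean_sq = np.mean(u) ** 2          # E[u]^2  (order 1e-7)
second_moment = np.mean(u**2)      # E[u^2]  (order 1e-4)

print(f"E[u]^2         = {mean_sq:.3e}")
print(f"E[u^2]         = {second_moment:.3e}")
print(f"exact variance = {second_moment - mean_sq:.3e}")
print(f"relative error = {mean_sq / second_moment:.2%}")   # well under 1%
```
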
  11. PortoMarco79

    Miller, Chapter 2 video: Probabilities

    Amazing response @David Harper CFA FRM. Thanks so much.
    Replies: 4 · Views: 26
  12. bbeckett

    P1.T2.305. Minimum variance hedge (Miller)

    Thanks David! I did just read that update...fortunately calc is slowly coming back to me with each example. Based on Bill's comments it would seem some other prep providers I have had access to may be light in this area. Thanks for the deep dive on the great, albeit challenging, questions!
    Replies: 4 · Views: 34
  13. cabrown085

    Variance and Covariance Calculation Clarification

    Hi David, Thanks! I work in Excel every day so being able to look at the numbers was a big help. What I was describing in the first part can be summed up as: sum(Pr*(X-µ)^2). The second equation can be described as: sum(Pr*X^2) - (sum(Pr*X))^2, where sum(Pr*X) = µ. What you were showing in the second example was that with samples it may be difficult to assign a true distribution, so instead for a sample mean, you...
    Replies: 3 · Views: 22
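
The two variance formulas compared above give the same answer, which a minimal sketch with a small hypothetical discrete distribution makes concrete:

```python
# Hedged sketch: sum(Pr*(X-mu)^2) versus sum(Pr*X^2) - (sum(Pr*X))^2.
import numpy as np

x = np.array([-5.0, 0.0, 10.0])      # hypothetical outcomes
p = np.array([0.25, 0.50, 0.25])     # probabilities (sum to 1)

mu = np.sum(p * x)                           # mean = sum(Pr*X)
var_definitional = np.sum(p * (x - mu)**2)   # E[(X - mu)^2]
var_shortcut = np.sum(p * x**2) - mu**2      # E[X^2] - mu^2

print(mu, var_definitional, var_shortcut)    # the two variance forms match
```
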
  14. cabrown085

    Uses of the Probability Density Function versus the Cumulative Distribution Function

    A discrete distribution has a pmf (probability mass function) instead of a probability density function (pdf), which is its continuous analog. An easy example of pmf/CDF is a fair six-sided die: the CDF is F(X) = X/6; i.e., the probability of rolling a three or less is 3/6 = 50%. The pmf is the derivative: if F(X) = 1/6*x, then f(X) = F'(X) = 1/6; i.e., the pmf of a fair die is f(x) = 1/6. If f(x) = ax +...
    Replies: 4 · Views: 31
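
A minimal sketch of the fair-die example above: the pmf assigns f(x) = 1/6 to each face, and the CDF accumulates to F(x) = x/6, so P(X ≤ 3) = 3/6 = 50%.

```python
# Hedged sketch: pmf and CDF of a fair six-sided die.
from fractions import Fraction

faces = range(1, 7)
pmf = {x: Fraction(1, 6) for x in faces}                         # each face equally likely
cdf = {x: sum(pmf[k] for k in faces if k <= x) for x in faces}   # F(x) = x/6

print(cdf[3])   # Fraction(1, 2): probability of rolling a three or less
print(cdf[6])   # Fraction(1, 1): the CDF reaches 1 at the maximum outcome
```
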
  15. Nicole Seaman

    P1.T2.602. Bootstrapping (Brooks)

    A GARCH process is covered in the readings ... Simulations are used to produce samples from distributions that are not parametric or not in "closed form" or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes or flood levels or survival in space ...
    Replies: 4 · Views: 110
  16. Nicole Seaman

    P1.T2.601. Variance reduction techniques (Brooks)

    Learning objectives: Explain how to use antithetic variate technique to reduce Monte Carlo sampling error. Explain how to use control variates to reduce Monte Carlo sampling error and when it is effective. Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them. Questions: 601.1. Betty is an analyst using Monte Carlo simulation to...
    Replies: 0 · Views: 78
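
To illustrate the antithetic variate technique named in these learning objectives, here is a minimal sketch estimating E[exp(Z)] for Z ~ N(0,1), pairing each draw z with -z; the target function is a hypothetical example, not one of the questions.

```python
# Hedged sketch: antithetic variates reduce Monte Carlo sampling error.
# True value of E[exp(Z)] is exp(0.5) ~ 1.6487.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
z = rng.standard_normal(n)

plain = np.exp(z)                              # ordinary Monte Carlo draws
antithetic = 0.5 * (np.exp(z) + np.exp(-z))    # average each draw with its mirror image

print(f"true value:        {np.exp(0.5):.4f}")
print(f"plain MC estimate: {plain.mean():.4f}  (std error {plain.std(ddof=1)/np.sqrt(n):.4f})")
print(f"antithetic MC:     {antithetic.mean():.4f}  (std error {antithetic.std(ddof=1)/np.sqrt(n):.4f})")
```
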
  17. Nicole Seaman

    P1.T2.600. Monte Carlo simulation, sampling error (Brooks)

    Thank you @QuantMan2318, nice reasoning! @ (cc [USER=27903]@Nicole Manley) The answer is given correctly as (C), which is false. But there was a typo: consistent with the text given, it should read "In regard to true (A), (B), and (D), ..." You might notice that the explanation itemizes each of the TRUE (A), (B), and (D), specifically:
    Replies: 4 · Views: 121
  18. Nicole Seaman

    P1.T2.512. Autoregressive moving average (ARMA) processes

    Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
    Replies: 0 · Views: 77
  19. David Harper CFA FRM

    P1.T2.511. First-order autoregressive, AR(1), process

    [USER=38486]@ Yes, if you look at the GARP curriculum for this year, you will see that these learning objectives are still under Topic 2, Reading 16, Diebold, Chapter 8. Thank you, Nicole
    Replies: 8 · Views: 154
  20. Nicole Seaman

    P1.T2.510. First-order and general finite-order moving average process, MA(1) and MA(q)

    If the roots are real and not complex, I believe.
    Replies: 2 · Views: 203
  21. Nicole Seaman

    P1.T2.509. Box-Pierce and Ljung-Box Q-statistics

    Hi Joyce, Wonder how I made a mistake - yes, you are right, I was looking at Chi-square 95%, 24 instead of Chi-square 5%, 24 = 36.415! Thanks a tonne :) Jayanthi
    Replies: 3 · Views: 145
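
The critical value cited above is easy to verify. A minimal sketch, assuming scipy is available: the chi-square value with 24 degrees of freedom that leaves 5% in the upper tail (the 95th percentile) is about 36.415.

```python
# Hedged sketch: chi-square critical values with 24 degrees of freedom.
from scipy.stats import chi2

upper_5pct = chi2.ppf(0.95, df=24)   # 5% in the upper tail -> ~36.415
lower_5pct = chi2.ppf(0.05, df=24)   # the other tail (the mistaken lookup in the thread)

print(f"Chi-square(24), 5% upper tail: {upper_5pct:.3f}")   # ~36.415
print(f"Chi-square(24), 5% lower tail: {lower_5pct:.3f}")   # ~13.848
```
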
  22. Nicole Seaman

    P1.T2.508. Wold's theorem

    [USER=42750]@ Okay great. No worries, honestly I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...
    Replies: 4 · Views: 210
  23. Nicole Seaman

    P1.T2.507. White noise

    Learning outcomes: Define white noise, describe independent white noise and normal (Gaussian) white noise. Explain the characteristics of the dynamic structure of white noise. Explain how a lag operator works. Questions: 507.1. In regard to white noise, each of the following statements is true EXCEPT which is false? a. If a process is zero-mean white noise, then it must be Gaussian white...
    Replies: 0 · Views: 123
  24. Nicole Seaman

    P1.T2.506. Covariance stationary time series

    Hi [USER=46018]@ See below I copied Diebold's explanation for partial autocorrelation (which is excellent, in my opinion). If you keep in mind the close relationship between beta and correlation, then you can view this as analogous to the difference between (in a regression) a univariate slope coefficient and a partial multivariate slope coefficient. We can extract correlation by multiplying...
    Replies: 6 · Views: 169
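
The regression analogy described above can be made concrete: the partial autocorrelation at lag k is the last slope coefficient when y(t) is regressed on its own first k lags. A minimal sketch on a simulated AR(1) series (purely illustrative data):

```python
# Hedged sketch: partial autocorrelation as the last coefficient in an AR(k) regression.
import numpy as np

rng = np.random.default_rng(1)
n, phi = 2000, 0.6
y = np.zeros(n)
for t in range(1, n):                        # simulate an AR(1): y(t) = 0.6*y(t-1) + e(t)
    y[t] = phi * y[t - 1] + rng.standard_normal()

def partial_autocorr(y, k):
    """Last coefficient from regressing y(t) on a constant and k of its own lags."""
    Y = y[k:]
    X = np.column_stack([np.ones(len(Y))] + [y[k - j:-j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[-1]                          # coefficient on y(t-k), holding nearer lags fixed

print(round(partial_autocorr(y, 1), 3))      # ~0.6 (equals the lag-1 autocorrelation)
print(round(partial_autocorr(y, 2), 3))      # ~0.0 (an AR(1) has no partial effect beyond lag 1)
```
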
  25. Nicole Seaman

    P1.T2.505. Model selection criteria (Diebold)

    Hi @DTu Yes, but depending on the author, (k) is sometimes defined as the number of independent variables or the number of parameters. For example, consider y = b + m1*x1 + m2*x2 + m3*x3 + e, which is a regression model with three independent variables (x1, x2, x3), four total variables (including the dependent variable), and four parameters (intercept b and slopes m1, m2, m3). The degrees of freedom, df = n-4 because four...
    Replies: 2 · Views: 251
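
A minimal sketch of the parameter count described above: a regression with three independent variables plus an intercept estimates four parameters, so the residual degrees of freedom are df = n - 4. The data are simulated placeholders.

```python
# Hedged sketch: counting parameters and residual degrees of freedom in OLS.
import numpy as np

rng = np.random.default_rng(3)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # intercept + x1, x2, x3
true_beta = np.array([1.0, 0.5, -0.3, 2.0])
y = X @ true_beta + rng.normal(scale=0.5, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
k = X.shape[1]                     # 4 estimated parameters (b, m1, m2, m3)
df_residual = n - k                # 50 - 4 = 46
s2 = np.sum((y - X @ beta_hat) ** 2) / df_residual   # unbiased residual variance

print(k, df_residual, round(s2, 4))
```
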
  26. Nicole Seaman

    P1.T2.504. Copulas (Hull)

    Hello The practice questions that David writes are focused around the learning objectives in the GARP curriculum, but many times, his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
    Replies: 25 · Views: 851
  27. Nicole Seaman

    P1.T2.503. One-factor model (Hull)

    @hellohi, This is how I have solved:
    e1 = z1 = -0.88
    e2 = ρ*z1 + z2*sqrt(1-ρ^2) = [0.70*(-0.88)] + [0.63*sqrt(1-0.70^2)] = -0.16609
    U = mean + (SD*e1) = 5 + [3*(-0.88)] = 2.36
    V = mean + (SD*e2) = 10 + [6*(-0.16609)] = 9.00346
    Thanks, Rajiv
    Replies: 20 · Views: 816
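
A minimal sketch replicating Rajiv's calculation above (correlated sample with ρ = 0.70), using the same inputs quoted in the post:

```python
# Hedged sketch: build the correlated epsilon and the two transformed variables.
import math

rho = 0.70
z1, z2 = -0.88, 0.63           # independent standard normal samples from the post

e1 = z1                                        # first variable uses z1 directly
e2 = rho * z1 + z2 * math.sqrt(1 - rho**2)     # correlated second epsilon

U = 5 + 3 * e1                  # mean 5, standard deviation 3
V = 10 + 6 * e2                 # mean 10, standard deviation 6

print(round(e2, 5), round(U, 2), round(V, 5))   # -0.16609, 2.36, 9.00346
```
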
  28. Nicole Seaman

    P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models

    @Annette007 That link (ie, ) still looks good to me, I'm not sure why you would get an error (?). As the XLS is a tiny file, I uploaded it here for you also @emilioalzamora1 Thanks for your help! :) FYI, we don't generally remove spreadsheets (and we would not do that due to subscription level: any XLS uploaded as part of the Q&A are meant to be available to all subscribers). In almost...
    Replies: 21 · Views: 543
  29. Nicole Seaman

    P1.T2.501. More Bayes Theorem (Miller)

    Hi @jacobweiss2305 Yes, agreed, nice attention to detail. What Miller calls the probability matrix (your lower "Unconditional") is, the way I look at it, essentially a matrix of joint probabilities; e.g., joint Prob(R ∩ U) = 3.0%. BTW, I tend to write this with a comma because the intersection symbol isn't always handy: joint Prob(R, U) = 3.0%. The joint probabilities are inside the square...
    Replies: 11 · Views: 333
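
The joint-versus-conditional distinction discussed above reduces to dividing a joint probability by a marginal. A minimal sketch: the joint probability P(R, U) = 3.0% is taken from the quoted post, while the marginal P(U) = 30% is a hypothetical placeholder, not Miller's actual figure.

```python
# Hedged sketch: conditional probability from a joint probability and a marginal.
p_joint_R_and_U = 0.03      # joint Prob(R ∩ U), taken from the quoted post
p_U = 0.30                  # hypothetical unconditional (marginal) probability of U

p_R_given_U = p_joint_R_and_U / p_U     # definition of conditional probability
print(f"P(R | U) = {p_R_given_U:.1%}")  # 10.0% under these assumed inputs
```
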
  30. Nicole Seaman

    P1.T2.500. Bayes theorem

    Testing Amazon link
    Replies: 25 · Views: 370
