# P1.T2. Quantitative Analysis

Practice questions for Quantitative Analysis: econometrics, Monte Carlo simulation (MCS), volatility, probability distributions, and VaR (intro)

1. ### R16.P1.T2. Hull - expected value of u(n+t-1)^2

You need the assumption that the drift term of u can be neglected. If u(t) is a random variable, then its variance is defined as σ(t)^2 = E[u(t)^2] - E[u(t)]^2. If you now assume that the E[u(t)]^2 term can be neglected relative to the E[u(t)^2] term, then you arrive at your result. u(t) models the return for a time period dt: u(t) = (s(t) - s(t - dt)) / s(t - dt), which means that E( u(t)...
Replies: 1 · Views: 16
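A quick numeric check of the snippet's point, with hypothetical daily figures (0.05% drift, 1.0% volatility): since σ² = E[u²] − E[u]², dropping the drift term changes E[u²] by only µ²/σ², which is tiny at a daily horizon.

```python
# Why daily variance can be estimated by E[u^2]: for a return u with mean (drift) mu
# and volatility sigma, variance = E[u^2] - E[u]^2, so E[u^2] = sigma^2 + mu^2.
# At a daily horizon mu is an order of magnitude smaller than sigma, so mu^2 is negligible.

def mean_square_return(mu: float, sigma: float) -> float:
    """E[u^2] for a return with mean mu and standard deviation sigma."""
    return sigma ** 2 + mu ** 2

# Hypothetical daily figures: 0.05% drift, 1.0% volatility
mu, sigma = 0.0005, 0.010
e_u2 = mean_square_return(mu, sigma)
relative_error = (e_u2 - sigma ** 2) / sigma ** 2  # error from dropping the drift term
```

Here the approximation error is only 0.25%, which is why the EWMA/GARCH updating formulas simply use u² in place of the de-meaned squared return.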
2. ### Miller, Chapter 2 video: Probabilities

Amazing response @David Harper CFA FRM . Thanks so much.
Replies: 4 · Views: 26
3. ### P1.T2.305. Minimum variance hedge (Miller)

Thanks David! I did just read that update...fortunately calc is slowly coming back to me with each example. Based on Bill's comments it would seem some other prep providers I have had access to may be light in this area. Thanks for the deep dive on the great, albeit challenging, questions!
Replies: 4 · Views: 32
4. ### Variance and Covariance Calculation Clarification

Hi David, Thanks! I work in Excel every day, so being able to look at the numbers was a big help. What I was describing in the first part can be summed up as: sum(Pr*(X-µ)^2). The second equation can be described as: sum(Pr*X^2) - (sum(Pr*X))^2, where sum(Pr*X) = µ. What you were showing in the second example was that with samples it may be difficult to assign a true distribution, so instead, for a sample mean, you...
Replies: 3 · Views: 22
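The two variance formulas in the snippet are algebraically identical; a sketch with a hypothetical three-outcome distribution confirms they agree:

```python
# Two equivalent ways to compute the variance of a discrete distribution:
#   (1) sum of Pr*(X - mu)^2
#   (2) sum of Pr*X^2 minus (sum of Pr*X)^2
# Hypothetical payoff distribution for illustration.
outcomes = [(-10.0, 0.2), (0.0, 0.5), (20.0, 0.3)]  # (value, probability)

mu = sum(p * x for x, p in outcomes)                          # E[X] = 4.0
var_definition = sum(p * (x - mu) ** 2 for x, p in outcomes)  # form (1)
var_shortcut = sum(p * x * x for x, p in outcomes) - mu ** 2  # form (2)
```

Both forms return 124.0 here; the shortcut form (2) is usually easier to compute in a spreadsheet because it needs only one pass over the probabilities.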
5. ### Uses of the Probability Density Function versus the Cumulative Distribution Function

A discrete distribution has a pmf (probability mass function) instead of a probability density function (pdf), which is its continuous analog. An easy example of the pmf/CDF pair is a fair six-sided die: the CDF is F(x) = x/6; i.e., the probability of rolling a three or less is 3/6 = 50%. The pmf is the derivative: if F(x) = (1/6)*x, then f(x) = F'(x) = 1/6; i.e., the pmf of a fair die is f(x) = 1/6. If f(x) = ax +...
Replies: 4 · Views: 27
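The fair-die example above can be sketched directly, with the CDF computed as the running sum of the pmf:

```python
from fractions import Fraction

# Fair six-sided die: pmf f(x) = 1/6 for x in 1..6; CDF F(x) = x/6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def cdf(x: int) -> Fraction:
    """P(X <= x): sum the pmf up to and including x."""
    return sum(pmf[k] for k in pmf if k <= x)

p_three_or_less = cdf(3)  # 3/6 = 1/2, matching the snippet
```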
6. ### P1.T2.602. Bootstrapping (Brooks)

A GARCH process is covered in the readings.... Simulations are used to produce samples from distributions that are not parametric or not in "closed form"; or, perhaps better, simulations can be used to generate samples from parametric distributions when actual samples are difficult to obtain! Imagine a simulation of earthquakes, or flood levels, or survival in space.....
Replies: 4 · Views: 98
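When the sampling distribution has no closed form, the bootstrap resamples the observed data with replacement. A minimal sketch, with a hypothetical set of daily returns:

```python
import random
import statistics

# Bootstrap sketch: resample the observed data with replacement and recompute the
# statistic each time; the spread of the recomputed statistics approximates its
# sampling distribution. Hypothetical daily returns for illustration.
returns = [0.012, -0.004, 0.007, -0.015, 0.003, 0.009, -0.002, 0.005]

def bootstrap_means(data, n_boot: int, seed: int = 7):
    rng = random.Random(seed)
    return [statistics.mean(rng.choices(data, k=len(data))) for _ in range(n_boot)]

boot = bootstrap_means(returns, n_boot=1000)
boot_sorted = sorted(boot)
ci_low, ci_high = boot_sorted[25], boot_sorted[974]  # rough 95% percentile interval
```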
7. ### P1.T2.601. Variance reduction techniques (Brooks)

Learning objectives: Explain how to use the antithetic variate technique to reduce Monte Carlo sampling error. Explain how to use control variates to reduce Monte Carlo sampling error and when it is effective. Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them. Questions: 601.1. Betty is an analyst using Monte Carlo simulation to...
Replies: 0 · Views: 69
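A minimal sketch of the antithetic variate technique named in the objectives: to estimate E[f(Z)] for standard normal Z, pair each draw z with −z and average f(z) and f(−z). For monotone f the pair is negatively correlated, which shrinks the sampling error at the same budget of function evaluations. Here f = exp, whose true mean is e^0.5 ≈ 1.6487.

```python
import math
import random
import statistics

rng = random.Random(123)
n = 20_000
draws = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Plain Monte Carlo: n evaluations of f.
plain = [math.exp(z) for z in draws]
# Antithetic: n/2 pairs (z, -z), also n evaluations of f in total.
antithetic = [(math.exp(z) + math.exp(-z)) / 2 for z in draws[: n // 2]]

est_plain = statistics.mean(plain)
est_anti = statistics.mean(antithetic)
se_plain = statistics.stdev(plain) / math.sqrt(len(plain))
se_anti = statistics.stdev(antithetic) / math.sqrt(len(antithetic))
```

Both estimates land near 1.6487, but the antithetic standard error is visibly smaller for the same number of exp() evaluations.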
8. ### P1.T2.600. Monte Carlo simulation, sampling error (Brooks)

Thank you @QuantMan2318 , nice reasoning! (cc @Nicole Manley ) The answer is given correctly as (C), which is false. But there was a typo: consistent with the text given, it should read "In regard to true (A), (B), and (D), ..." You might notice that the explanation itemizes each of the TRUE (A), (B), and (D), specifically:
Replies: 4 · Views: 108
9. ### P1.T2.512. Autoregressive moving average (ARMA) processes

Learning outcomes: Define and describe the properties of the autoregressive moving average (ARMA) process. Describe the application of AR and ARMA processes. Questions: 512.1. Each of the following is a motivation for an autoregressive moving average (ARMA) process EXCEPT which is not? a. AR processes observed subject to measurement error also turn out to be ARMA processes b. When we need...
Replies: 0 · Views: 72
10. ### P1.T2.511. First-order autoregressive, AR(1), process

Yes, if you look at the GARP curriculum for this year, you will see that these learning objectives are still under Topic 2, Reading 16, Diebold, Chapter 8. Thank you, Nicole
Replies: 8 · Views: 140
11. ### P1.T2.510. First-order and general finite-order moving average process, MA(1) and MA(q)

If the roots are real and not complex, I believe.
Replies: 2 · Views: 179
12. ### P1.T2.509. Box-Pierce and Ljung-Box Q-statistics

Hi Joyce, I wonder how I made that mistake - yes, you are right, I was looking at the chi-square (95%, 24) instead of the chi-square (5%, 24) = 36.415! Thanks a tonne, Jayanthi
Replies: 3 · Views: 128
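The table value quoted above (chi-square critical value of 36.415 for a 5% right tail with df = 24) can be sanity-checked without a table using the Wilson-Hilferty approximation; this is only an approximation, not the exact quantile:

```python
import math

# Wilson-Hilferty approximation to the chi-square quantile: for standard-normal
# quantile z, chi2 ~ df * (1 - 2/(9*df) + z*sqrt(2/(9*df)))^3.
def chi2_critical(df: int, z: float) -> float:
    a = 2.0 / (9.0 * df)
    return df * (1.0 - a + z * math.sqrt(a)) ** 3

approx = chi2_critical(24, 1.6449)  # z = 1.6449 is the 95th percentile of N(0,1)
```

The approximation gives about 36.41, within a hundredth or two of the tabulated 36.415.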
13. ### P1.T2.508. Wold's theorem

Okay great. No worries, honestly I learn something new almost every time that I take a fresh look at something! Good luck with your studies ...
Replies: 4 · Views: 172
14. ### P1.T2.507. White noise

Learning outcomes: Define white noise; describe independent white noise and normal (Gaussian) white noise. Explain the characteristics of the dynamic structure of white noise. Explain how a lag operator works. Questions: 507.1. In regard to white noise, each of the following statements is true EXCEPT which is false? a. If a process is zero-mean white noise, then it must be Gaussian white...
Replies: 0 · Views: 106
15. ### P1.T2.506. Covariance stationary time series

I would highly appreciate it if you could paste the definition related to 506.3 here, please.
Replies: 4 · Views: 139
16. ### P1.T2.505. Model selection criteria (Diebold)

Hi @DTu Yes, but depending on the author, (k) is sometimes defined as the number of independent variables and sometimes as the number of parameters. For example, y = b + m1*x1 + m2*x2 + m3*x3 + e is a regression model with three independent variables (x1, x2, x3), four total variables (including the dependent y), and four parameters (the intercept b and slopes m1, m2, m3). The degrees of freedom, df = n - 4, because four...
Replies: 4 · Views: 225
17. ### P1.T2.504. Copulas (Hull)

Hello, The practice questions that David writes are focused on the learning objectives in the GARP curriculum, but many times his questions are more difficult. He writes them at a higher level to ensure that our members understand the concepts in depth. So while this question may be more difficult than the questions that you will see on the exam, the concepts are still testable, as they...
Replies: 30 · Views: 750
18. ### P1.T2.503. One-factor model (Hull)

@hellohi, This is how I have solved it:
e1 = z1 = -0.88
e2 = p*z1 + z2*sqrt(1 - p^2) = [0.70*(-0.88)] + [0.63*sqrt(1 - 0.70^2)] = -0.16609
U = mean + SD*e1 = 5 + [3*(-0.88)] = 2.36
V = mean + SD*e2 = 10 + [6*(-0.16609)] = 9.00346
Thanks, Rajiv
Replies: 20 · Views: 712
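Rajiv's arithmetic above can be reproduced directly: build a second standard normal e2 correlated with e1 at rho = 0.70 from the independent draws z1 = -0.88 and z2 = 0.63, then map each epsilon into its own distribution.

```python
import math

rho, z1, z2 = 0.70, -0.88, 0.63

e1 = z1
e2 = rho * z1 + z2 * math.sqrt(1 - rho ** 2)  # correlated standard normal

U = 5 + 3 * e1    # U ~ N(5, 3^2), so U = mean + SD * e1
V = 10 + 6 * e2   # V ~ N(10, 6^2), so V = mean + SD * e2
```

This recovers e2 = -0.16609, U = 2.36, and V = 9.00346, matching the post.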
19. ### P1.T2.502. Covariance updates with EWMA and GARCH(1,1) models

that helps much .....thanks a lot dear deepak....
Replies: 18 · Views: 445
20. ### P1.T2.501. More Bayes Theorem (Miller)

Thank you very much Ami. I mangled the formula.
Replies: 7 · Views: 266
22. ### P1.T2.409 Volatility, GARCH(1,1) and EWMA

Per @Robert Paterson 's correction, the first bullet under 409.2.A was corrected to read: In regard to (a), this is FALSE: because the weights sum to one (i.e., alpha + beta + gamma = 1.0) and omega = long-run variance*gamma, the long-run volatility = sqrt[omega/gamma] = sqrt[omega/(1 - alpha - beta)] = sqrt[0.0000960/(1 - 0.060 - 0.880)] = sqrt[0.0000960/0.060] = 4.0% (+1 star for @Robert...
Replies: 2 · Views: 132
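The corrected bullet above can be verified in a few lines: with GARCH(1,1) weights alpha + beta + gamma = 1 and omega = gamma * (long-run variance), the long-run variance is omega / (1 - alpha - beta).

```python
import math

# GARCH(1,1) parameters from the bullet above.
omega, alpha, beta = 0.0000960, 0.060, 0.880

gamma = 1.0 - alpha - beta                          # weight on the long-run variance
long_run_variance = omega / gamma                   # 0.0000960 / 0.060 = 0.0016
long_run_volatility = math.sqrt(long_run_variance)  # sqrt(0.0016) = 4.0%
```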
23. ### P1.T2.408. Multivariate linear regression

In the case of heteroskedasticity there will always be a downward bias on the standard error, making the t-statistics (and obviously the F-statistic) higher. So, in that way, (B) is correct too. I am not saying (D) is incorrect: (D) is obviously correct, as in the case of multicollinearity there will be large standard errors for the coefficients (of the independent variables), rendering low t-statistics for them,...
Replies: 7 · Views: 160
24. ### P1.T2.407. Univariate linear regression

Hi @onion Your observation is very tempting but, to my knowledge, there is no assumption of normality required in any of the CLRM assumptions that we have studied in the FRM (I do realize it is sometimes attached to anticipate small sample sizes). Perhaps the question is too subtle to be fair. However, strictly speaking, our key requirement of the error term is that (i) its conditional mean be...
Replies: 10 · Views: 196
25. ### P1.T2.406. Distributions II

Hi @fjc120 The F-distribution is on page 60 of P1.T2. Miller (see below). It's also in the Miller reading, although personally I do not find Miller's explanation awesomely helpful. That is, we just take the ratio of the two sample variances, and this F-ratio (aka, variance ratio) is used to test the null hypothesis that the (underlying population) variances are equal. If the population...
Replies: 21 · Views: 243
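The F-ratio described above is simply the ratio of the two sample variances (conventionally the larger over the smaller, so the ratio is at least 1). A sketch with hypothetical samples:

```python
import statistics

# Hypothetical samples for illustration of the variance-ratio (F) statistic.
sample_a = [2.1, 2.9, 3.4, 1.8, 2.6, 3.1]
sample_b = [2.4, 2.5, 2.7, 2.3, 2.6, 2.5]

var_a = statistics.variance(sample_a)  # sample variance (n - 1 denominator)
var_b = statistics.variance(sample_b)
f_ratio = max(var_a, var_b) / min(var_a, var_b)
```

A large ratio relative to the F critical value (with n-1 numerator and denominator degrees of freedom) rejects the null of equal population variances.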
26. ### P1.T2.405. Distributions I

Hi @theproman23 There is no sample, so there is no standard error; question 405.1 is just asking about the properties of the given distribution. To contrast, let me ask a question that does invoke the standard error (which, in this case, is the standard deviation of a sample mean, not of the population). Here is the alternate question, just for contrast: Assume a population with mean earnings of $2.5...
Replies: 14 · Views: 381
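The distinction drawn above — the standard error is the standard deviation of the sample mean, not of the population — reduces to sigma / sqrt(n). The snippet's own figures are truncated, so the numbers below (sigma = $1.20, n = 36) are hypothetical:

```python
import math

# Standard error of the sample mean: SE = sigma / sqrt(n).
sigma, n = 1.20, 36                      # hypothetical population SD and sample size
standard_error = sigma / math.sqrt(n)    # 1.20 / 6 = 0.20
```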
27. ### P1.T2.404. Basic Statistics

Hi @theproman23 Yes, for observations i = 1...n, the sample variance is [Σ(Xi - µ)^2]/(n - 1), where the numerator is the sum of squared differences from the sample mean (µ, which I am using here to denote the sample mean, although it should be x-bar). The sample standard deviation is the square root of the sample variance. This question is testing logic against an understanding of these...
Replies: 2 · Views: 158
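The sample-variance formula above can be sketched by hand and checked against the standard library, using a small made-up data set:

```python
import math
import statistics

# Sample variance: sum of squared differences from the sample mean, over n - 1.
data = [2.0, 4.0, 6.0, 8.0]

x_bar = sum(data) / len(data)                                    # sample mean = 5.0
sample_var = sum((x - x_bar) ** 2 for x in data) / (len(data) - 1)  # 20 / 3
sample_sd = math.sqrt(sample_var)
```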
28. ### P1.T2.403. Probabilities

Hi @superpocoyo Right, I simply "condensed" the same idea; i.e., you are correct that per Bayes: P(speculative|default) = P(default|speculative)*P(speculative)/P(default). But notice: P(default|speculative)*P(speculative) = P(default, speculative); i.e., alternatively, P(default|speculative) = P(default, speculative)/P(speculative). Such that we can also express Bayes as: P(speculative|default)...
Replies: 11 · Views: 287
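The Bayes rearrangement above can be sketched numerically. The thread's own figures are truncated, so the probabilities below are hypothetical, with P(default) built up from the two conditional default probabilities:

```python
# Bayes: P(speculative|default) = P(default|speculative) * P(speculative) / P(default)
p_spec = 0.30               # hypothetical P(speculative)
p_def_given_spec = 0.05     # hypothetical P(default | speculative)
p_def_given_inv = 0.005     # hypothetical P(default | investment grade)

# Total probability of default across both rating classes.
p_default = p_def_given_spec * p_spec + p_def_given_inv * (1 - p_spec)
p_spec_given_default = p_def_given_spec * p_spec / p_default
```

Note the numerator is exactly the joint probability P(default, speculative), which is the "condensed" form David refers to.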
29. ### P1.T2.402. Random number generators

AIMs: Describe the inverse transform method and its implementation in discrete and continuous distributions. Describe standards for an effective pseudorandom number generator and explain midsquare technique and congruential pseudorandom number generators. Describe quasi-random (low-discrepancy) sequences and explain how they work in simulations. Explain the mechanics and characteristics of the...
Replies: 0 · Views: 96
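A minimal sketch of the inverse transform method named in the AIMs, using the exponential distribution (chosen because its CDF inverts in closed form): invert F(x) = 1 - exp(-lam*x) to get x = -ln(1 - u)/lam for uniform u in [0, 1).

```python
import math
import random

def exponential_inverse_transform(u: float, lam: float) -> float:
    """Map a uniform draw u in [0, 1) to an exponential draw with rate lam."""
    return -math.log(1.0 - u) / lam

rng = random.Random(2024)
samples = [exponential_inverse_transform(rng.random(), lam=0.5) for _ in range(5)]
median_check = exponential_inverse_transform(0.5, lam=0.5)  # equals ln(2)/0.5
```

Feeding u = 0.5 recovers the distribution's median, a quick check that the inversion is correct.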