What's new

1. ### YouTube T2-18b Univariate regression: Confidence interval of slope coefficient

The confidence interval (CI) of the slope coefficient is given by β(1) +/- Standard_Error[β(1)]*Z(α), where Z(α) is the student's t or normal deviate based on the desired confidence level; e.g., if the two-sided confidence level is 95.0%, then Z(0.95) = 1.96. David's XLS: https://trtl.bz/2vEB0aE
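Here is a quick Python sketch of the CI calculation (the data are made up for illustration, not the values in David's XLS):

```python
import numpy as np
from scipy import stats

# Hypothetical sample data (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

res = stats.linregress(x, y)            # slope b(1) and its standard error
n = len(x)
# Two-sided 95% CI: small samples use the student's t deviate, d.f. = n - 2
t_crit = stats.t.ppf(0.975, df=n - 2)
ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)
```

With only six observations the t deviate (here about 2.78) is noticeably wider than the normal's 1.96.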
2. ### YouTube T2-18 Regression: Significance Test of Slope Coefficient

The test statistic of the slope is given by (b1 - β)/SE(b1), although typically the null hypothesis is H(0):β = 0, such that the test statistic simply divides the regression coefficient by its own standard error (i.e., standard deviation of the estimate). This is compared to the student's t...
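A minimal sketch of the significance test, again with made-up data (not the video's numbers):

```python
import numpy as np
from scipy import stats

# Hypothetical data (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

res = stats.linregress(x, y)
# Under H(0): beta = 0, the test statistic is simply b1 / SE(b1)
t_stat = res.slope / res.stderr
# Two-sided p-value from the student's t with n - 2 degrees of freedom
p_value = 2 * stats.t.sf(abs(t_stat), df=len(x) - 2)
```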
3. ### YouTube T2-17 Regression: R-squared

The R-squared (aka, coefficient of determination) is a goodness of fit measure. It gives the percentage of TOTAL variation that is explained by the regression line. Here is David's XLS: https://trtl.bz/2Exyu5c
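The "percentage of TOTAL variation explained" idea can be sketched directly (hypothetical data, not the XLS values):

```python
import numpy as np

# Hypothetical data (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
y_hat = b0 + b1 * x
tss = np.sum((y - y.mean()) ** 2)       # TOTAL variation
ssr = np.sum((y - y_hat) ** 2)          # unexplained (residual) variation
r_squared = 1.0 - ssr / tss             # fraction of total variation explained
```

In the univariate case this matches the squared correlation between x and y.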
4. ### YouTube T2-16 Regression: standard error of regression

The standard error of the regression (SER) is a key measure of the OLS regression line's "goodness of fit." The SER equals the square root of [sum of squared residuals (SSR) divided by the degrees of freedom (d.f.)], where d.f. is the number of observations minus the number of regression...
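The SER formula can be sketched on a tiny hypothetical dataset (small enough to check by hand; not the XLS data):

```python
import numpy as np

# Tiny hypothetical dataset
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 4.0])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
ssr = np.sum((y - (b0 + b1 * x)) ** 2)   # sum of squared residuals
df = len(x) - 2                          # n minus number of coefficients (slope + intercept)
ser = np.sqrt(ssr / df)
```

Here the residuals work out to (1/6, -1/3, 1/6), so SSR = 1/6 and SER = sqrt(1/6).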
5. ### YouTube T2-15 Linear regression: OLS coefficients minimize the SSR

The ordinary least squares (OLS) regression coefficients are determined by the "best fit" line that minimizes the sum of squared residuals (SSR). David's XLS: https://trtl.bz/2uiivIm
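The "minimizes the SSR" property is easy to verify numerically: perturbing either OLS coefficient can only increase the SSR (hypothetical data):

```python
import numpy as np

# Hypothetical data (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.5, 4.0, 4.5, 6.0])

# Closed-form OLS coefficients
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

def ssr(intercept, slope):
    """Sum of squared residuals for a candidate line."""
    return np.sum((y - (intercept + slope * x)) ** 2)
```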
6. ### YouTube T2-14 Linear regression: Sample Regression Function

In theory, there is one population (and one population regression function). Each sample varies and generates its own sample regression function (SRF). Therefore, the regression coefficients generated by the SRF are random variables; e.g., their standard deviations are the standard errors...
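A simulation sketch of this idea (one assumed population, many samples; the parameters are invented for illustration): each sample yields its own slope, and the standard deviation of those slopes across samples approximates the analytic standard error.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
beta0, beta1, sigma = 1.0, 2.0, 0.5   # one "true" population regression function

# Each simulated sample generates its own sample regression function (SRF)
slopes = []
for _ in range(5000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, x.size)
    slopes.append(np.polyfit(x, y, 1)[0])

# The slope estimate is a random variable; its standard deviation across
# samples approximates the analytic standard error of the slope
se_analytic = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))
```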
7. ### YouTube T2-13 Type I versus II error and power

A Type I error mistakenly rejects a true null; a Type II error mistakenly accepts a false null. Significance, α, is the desired Prob[Type I error]. Power is 1 - β = 1 - Prob[Type II error] but is more difficult to compute because, while there is only one true null, there can be many false...
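A sketch of the power computation for ONE assumed alternative (all inputs here are hypothetical; a normal test statistic with known sigma is assumed):

```python
import numpy as np
from scipy import stats

alpha = 0.05                 # desired Prob[Type I error] under H0: mu = 0
sigma, n = 1.0, 25
mu_alt = 0.5                 # ONE assumed false-null alternative; many others exist
se = sigma / np.sqrt(n)
z_crit = stats.norm.ppf(1 - alpha / 2)

# Power = 1 - beta = Prob[reject H0 | true mean = mu_alt]
power = stats.norm.sf(z_crit - mu_alt / se) + stats.norm.cdf(-z_crit - mu_alt / se)
```

Each different alternative mu_alt gives a different power, which is why power is harder to pin down than significance.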
8. ### YouTube T2-12 The p value is the exact significance level

The p value is the area in the rejection region(s). In this example, we observe a sample mean of +15 bps and our null hypothesis is that the "true" population mean is zero. The corresponding p value of 2.36% is the exact (i.e., lowest) significance level at which we can reject the null. Put...
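A sketch of the calculation: the observed mean of +15 bps is taken from the description above, but the standard error below is ASSUMED for illustration (it is not the video's input, so the resulting p value differs from the 2.36% in the example):

```python
from scipy import stats

# Hypothetical inputs: observed sample mean of +15 bps, ASSUMED standard error
x_bar, mu_0, se = 0.0015, 0.0, 0.0005

z = (x_bar - mu_0) / se
p_value = 2 * stats.norm.sf(abs(z))   # two-sided area in the rejection regions
# The null can be rejected at any significance level >= p_value
```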
9. ### YouTube T2-11 Test of sample variance

If (we can assume) the population is normal, then the chi-square distribution can be used to test the sample variance (this is analogous to using the student's t for a test of the sample mean). David's XLS: http://trtl.bz/011018-yt-sample-variance-xls
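A minimal sketch of the chi-square variance test (sample values and the hypothesized variance are invented, not from David's XLS):

```python
import numpy as np
from scipy import stats

# Hypothetical normal sample and hypothesized population variance
sample = np.array([1.2, -0.4, 0.8, 2.1, -1.3, 0.5, 1.7, -0.2])
sigma0_sq = 1.0

n = len(sample)
s_sq = sample.var(ddof=1)                  # unbiased sample variance
chi2_stat = (n - 1) * s_sq / sigma0_sq     # ~ chi-square with n - 1 d.f. under H0
# Two-sided p-value: double the smaller tail area
p_value = 2 * min(stats.chi2.cdf(chi2_stat, n - 1),
                  stats.chi2.sf(chi2_stat, n - 1))
```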
10. ### YouTube T2-10 Test of sample mean (Confidence interval, test statistic and p-value)

This explores the answer to Miller's EOC Question #2: "You are given the following sample of annual returns for a portfolio manager. If you believe that the distribution of returns has been stable over time and will continue to be stable over time, how confident should you be that the portfolio...
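The same machinery (confidence interval, test statistic, p-value) can be sketched on a hypothetical set of annual returns (NOT the returns from Miller's question):

```python
import numpy as np
from scipy import stats

# Hypothetical annual returns (illustrative only)
returns = np.array([0.05, -0.02, 0.08, 0.03, 0.10, -0.01, 0.06, 0.04])

# Test H0: true mean return = 0, using the student's t
t_stat, p_value = stats.ttest_1samp(returns, popmean=0.0)

# Matching two-sided 95% confidence interval for the mean
n = len(returns)
se = returns.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (returns.mean() - t_crit * se, returns.mean() + t_crit * se)
```

Rejecting at 5% significance is equivalent to the 95% CI excluding zero.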
11. ### YouTube T2-9c Bayes Theorem, Three-state variable

This explores the answer to Miller's sample question in Chapter 6 of http://amzn.to/2C88m0i. There are three types of managers: Out-performers (MO), in-line performers (MI) and under-performers (MU). The prior probability that a manager is an outperformer is 20.0%. But if we observe two years of...
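A sketch of the three-state Bayesian update: the 20.0% outperformer prior is from the description above, but the remaining priors and beat-the-market probabilities are ASSUMED for illustration (they are not Miller's exact numbers):

```python
# Hypothetical three-state setup (only the 20% MO prior comes from the text)
priors = {"MO": 0.20, "MI": 0.50, "MU": 0.30}
p_beat = {"MO": 0.75, "MI": 0.50, "MU": 0.25}

# Observe two consecutive beat-the-market years (independent given type)
likelihood = {k: p_beat[k] ** 2 for k in priors}
evidence = sum(priors[k] * likelihood[k] for k in priors)
posterior = {k: priors[k] * likelihood[k] / evidence for k in priors}
```

The evidence of two winning years shifts probability mass toward the outperformer type.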

Here is the question: "You are an analyst at Astra Fund of Funds. Based on an examination of historical data, you determine that all fund managers fall into one of two groups. Stars are the best managers. The probability that a star will beat the market in any given year is 75%. Ordinary...
13. ### YouTube T2-9 Bayes Theorem: Simple test for disease

Bayes Theorem updates a conditional probability with new evidence. In this case, the conditional probability (disease | positive test result) equals the joint probability (disease, positive test result) divided by the unconditional probability (positive test result). The question illustrated is...
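The joint-over-unconditional structure can be sketched with hypothetical rates (ASSUMED for illustration, not necessarily the video's exact numbers):

```python
# Hypothetical rates (illustrative only)
p_disease = 0.01        # unconditional prevalence
sensitivity = 0.95      # P(positive | disease)
specificity = 0.90      # P(negative | no disease)

# Unconditional P(positive) via the total probability rule
p_positive = sensitivity * p_disease + (1 - specificity) * (1 - p_disease)

# Bayes: P(disease | positive) = joint / unconditional
p_disease_given_positive = (sensitivity * p_disease) / p_positive
```

Even with a fairly accurate test, low prevalence keeps the posterior probability of disease surprisingly small (here under 9%).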
14. ### YouTube T2-8 Covariance: population vs. sample, and relationship to correlation

Covariance is a measure of linear co-movement between variables. Independence implies zero covariance, but the converse is not necessarily true (because variables can be dependent in a non-linear way). Here is David's XLS: http://trtl.bz/2B9nqdO
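The population-versus-sample distinction (divide by n versus n - 1) and the link to correlation, sketched on a tiny hypothetical dataset (not the XLS data):

```python
import numpy as np

# Tiny hypothetical dataset so results are easy to verify by hand
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])

cov_pop = np.mean((x - x.mean()) * (y - y.mean()))   # population: divide by n
cov_sample = np.cov(x, y, ddof=1)[0, 1]              # sample: divide by n - 1
corr = cov_sample / (x.std(ddof=1) * y.std(ddof=1))  # correlation standardizes covariance
```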
15. ### YouTube T2-7 Kurtosis of a probability distribution

Kurtosis is the standardized fourth central moment and is a measure of tail density; e.g., heavy or fat tails. Heavy-tailedness also tends to correspond to high peakedness. Excess kurtosis (aka, leptokurtosis) is given by (kurtosis-3). We subtract three because the normal distribution has...
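A sketch confirming the "subtract three" convention on simulated normal data: the standardized fourth central moment comes out near 3, so excess kurtosis is near zero (note scipy's default already returns excess kurtosis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
z = rng.standard_normal(200_000)

kurt = np.mean((z - z.mean()) ** 4) / z.std() ** 4   # standardized fourth central moment
excess = kurt - 3.0                                  # normal kurtosis is 3
excess_scipy = stats.kurtosis(z)                     # fisher=True by default: excess kurtosis
```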
16. ### YouTube T2-6 The skew (and sample skew) of a distribution

The skew is the third central moment divided by the cube of the standard deviation. Here I calculate skew using the binomial distribution.
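A sketch of the same calculation: skew computed from the binomial pmf as the third central moment over the cubed standard deviation (the n and p below are illustrative, not necessarily the video's parameters):

```python
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3            # illustrative binomial parameters
k = np.arange(n + 1)
pmf = binom.pmf(k, n, p)

mu = np.sum(k * pmf)
sigma = np.sqrt(np.sum((k - mu) ** 2 * pmf))
# Skew = third central moment divided by the cube of the standard deviation
skew = np.sum((k - mu) ** 3 * pmf) / sigma ** 3
```

This matches the binomial's closed-form skew, (1 - 2p)/sqrt(np(1-p)); with p below 0.5 the distribution is right-skewed.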
17. ### YouTube T2-5 Variance of a discrete random variable

The variance is a key measure of dispersion: it is the expected value of the squared difference between each value and the mean. The population variance is the "true" variance, but realistically in most cases, we have a sample (rather than a population) such that our unbiased estimate of the...
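A minimal sketch of the definition (the values and probabilities are invented for illustration):

```python
# Hypothetical discrete distribution (illustrative only)
values = [1.0, 2.0, 3.0]
probs = [0.2, 0.5, 0.3]

mean = sum(v * p for v, p in zip(values, probs))
# Variance = expected value of the squared difference from the mean
variance = sum(p * (v - mean) ** 2 for v, p in zip(values, probs))
```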
18. ### YouTube T2-4 What is statistical independence?

Variables are independent if and only if (iff) their JOINT probability is equal to the product of their unconditional (aka, marginal) probabilities; i.e., if and only if Prob(X,Y) = Prob(X)*Prob(Y). Further, if variables are independent then their covariance (and correlation) is equal to zero...
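The iff condition Prob(X,Y) = Prob(X)*Prob(Y) can be sketched by building an independent joint distribution from hypothetical marginals:

```python
import numpy as np

# Marginal (unconditional) probabilities of two hypothetical binary variables
p_x = np.array([0.3, 0.7])
p_y = np.array([0.4, 0.6])

# Under independence, every joint probability is the product of the marginals
joint = np.outer(p_x, p_y)     # Prob(X=i, Y=j) = Prob(X=i) * Prob(Y=j)
```

Summing the joint over either variable recovers the other variable's marginal.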
19. ### YouTube T2-3 Probability Matrix

The probability matrix includes joint probabilities on the "inside" and unconditional (aka, marginal) probabilities on the outside. The key relationship is joint probability = unconditional * conditional. Here is David's XLS: https://www.dropbox.com/s/thqkesz65niutil/1204-yt-probability-matrix.xlsx
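The matrix structure can be sketched with hypothetical joint probabilities (not the XLS values): the marginals on the "outside" are row/column sums, and joint = unconditional * conditional:

```python
import numpy as np

# Hypothetical joint probabilities on the "inside" of the matrix
joint = np.array([[0.10, 0.20],
                  [0.30, 0.40]])

# Unconditional (marginal) probabilities on the "outside"
row_marginal = joint.sum(axis=1)
col_marginal = joint.sum(axis=0)

# Conditional P(column | row); key relationship: joint = unconditional * conditional
conditional = joint / row_marginal[:, None]
```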
20. ### YouTube T2-2 Inverse transform method

The inverse transform method is simply a way to create a random variable that is characterized by a SPECIFICALLY desired distribution (it can be any distribution, parametric or empirical). For example, =NORM.S.INV(RAND()) transforms a random uniform into a random standard normal. The "inverse"...
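A sketch of the same idea in Python, mirroring =NORM.S.INV(RAND()): feed uniforms through the inverse CDF of the target distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
u = rng.uniform(size=100_000)   # random uniforms, like Excel's RAND()

# Inverse CDF (percent point function) maps each uniform into the target
# distribution, like Excel's =NORM.S.INV(RAND())
z = stats.norm.ppf(u)
```

Swapping in any other distribution's ppf (or an empirical inverse CDF) generates draws from that distribution instead.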