Variance property question

kevolution

Member
In the study notes it mentions var(X-Y) = var(X) + var(Y). Is this a typo? Or is the RHS really supposed to be addition, not subtraction? Thanks!

brian.field

Well-Known Member
Subscriber
Not a typo! It is a good question. Var(aX + bY) = a^2*Var(X) + b^2*Var(Y), so we know the following as well:

Var(aX - bY) = a^2*Var(X) + (-b)^2*Var(Y) = a^2*Var(X) + b^2*Var(Y)

The minus sign disappears because (-b)^2 = b^2.
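If a numerical sanity check helps, here is a sketch using NumPy with simulated independent normals (the variable names and parameters are just illustrative); the match is approximate because the sample covariance of independent draws is only near zero, not exactly zero:

```python
import numpy as np

# Simulate two (approximately) independent random variables.
rng = np.random.default_rng(42)
n = 1_000_000
X = rng.normal(loc=0.0, scale=2.0, size=n)   # var(X) ~ 4
Y = rng.normal(loc=1.0, scale=3.0, size=n)   # var(Y) ~ 9

a, b = 1.5, 0.5

# Var(aX - bY) should match a^2*Var(X) + b^2*Var(Y):
# the minus sign squares away because (-b)^2 = b^2.
lhs = np.var(a * X - b * Y, ddof=1)
rhs = a**2 * np.var(X, ddof=1) + b**2 * np.var(Y, ddof=1)
print(lhs, rhs)  # nearly equal, up to sampling noise in cov(X, Y)
```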

kevolution

Member
Ah, that totally makes sense! Thanks Brian!

David Harper CFA FRM
Staff member
Subscriber
I just wanted to add (and I assume the note reflects this) that var(X - Y) = var(X) + var(Y) and var(aX + bY) = a^2*var(X) + b^2*var(Y) both assume independence, i.e., cov(X, Y) = 0.
Brian's formula is a special case of var(aX + bY) = a^2*var(X) + b^2*var(Y) + 2*a*b*cov(X,Y), where the third term drops out if X and Y are independent. If they aren't independent, we can continue with his logic (!):
var(aX - bY) = var[aX + (-b)Y] = a^2*var(X) + (-b)^2*var(Y) + 2*a*(-b)*cov(X,Y) = a^2*var(X) + b^2*var(Y) - 2*a*b*cov(X,Y)
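The full formula with the covariance term can be checked exactly on any sample, not just approximately. A sketch with NumPy (the deliberately correlated construction of Y is just for illustration; ddof=1 is used consistently so np.var and np.cov agree):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(size=n)   # deliberately correlated with X

a, b = 2.0, 3.0
cov_xy = np.cov(X, Y, ddof=1)[0, 1]

# var(aX - bY) = a^2*var(X) + b^2*var(Y) - 2*a*b*cov(X, Y)
lhs = np.var(a * X - b * Y, ddof=1)
rhs = (a**2 * np.var(X, ddof=1) + b**2 * np.var(Y, ddof=1)
       - 2 * a * b * cov_xy)
print(np.isclose(lhs, rhs))  # True: the identity holds exactly for sample moments
```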

... while I'm here, I do like to remind that var(X) = cov(X, X), so we can generalize this even further by referring to a covariance property (the only reason to do this, I suppose, is for a deeper understanding); see https://en.wikipedia.org/wiki/Covariance#Properties where we have a pretty cool expansion:
cov(aX + bY, cW + dV) = ac*cov(X,W) + ad*cov(X,V) + bc*cov(Y,W) + bd*cov(Y,V). But if we want var(aX + bY), that is just cov(aX + bY, aX + bY), which becomes:
cov(aX + bY, aX + bY) = a^2*cov(X,X) + ab*cov(X,Y) + ba*cov(Y,X) + b^2*cov(Y,Y)
= a^2*cov(X,X) + b^2*cov(Y,Y) + 2*ab*cov(X,Y)
= a^2*var(X) + b^2*var(Y) + 2*ab*cov(X,Y). I hope that's interesting!
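Both expansions can be verified numerically as well. A sketch using NumPy (arbitrary coefficients and simulated data; both identities hold exactly for sample moments when ddof is used consistently):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
X, Y, W, V = rng.normal(size=(4, n))

a, b, c, d = 1.0, -2.0, 0.5, 3.0

def cov(u, v):
    """Sample covariance with ddof=1 (matches np.var(..., ddof=1))."""
    return np.cov(u, v, ddof=1)[0, 1]

# Bilinearity: cov(aX + bY, cW + dV) expands term by term.
lhs = cov(a * X + b * Y, c * W + d * V)
rhs = (a * c * cov(X, W) + a * d * cov(X, V)
       + b * c * cov(Y, W) + b * d * cov(Y, V))

# And var(aX + bY) is just cov(aX + bY, aX + bY).
var_lhs = np.var(a * X + b * Y, ddof=1)
var_rhs = (a**2 * np.var(X, ddof=1) + b**2 * np.var(Y, ddof=1)
           + 2 * a * b * cov(X, Y))
```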

brian.field

Well-Known Member
Subscriber
Oh man David - this is the good stuff!
