In computing the standard error, in the lecture we have:

se(b0) = sqrt(var(b0)).

I am wondering whether it should instead be:

se(b0) = sqrt(var(b0))/sqrt(n)

This is because in the CLT we have:

Let X1, X2, ..., Xn be n i.i.d. random variables, each with mean mu and variance sigma^2, and let Xmean be their sample mean.

Then:

sqrt(n) ( Xmean - mu) ==> N(0,sigma^2)

mu = Xmean +/- Z(alpha/2) x sigma/sqrt(n)

Hence the standard error is the square root of the variance, divided by the square root of n.
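To make my reasoning concrete, here is a quick simulation sketch (my own illustration, not from the lecture) checking that the spread of the sample mean matches sigma/sqrt(n), as the CLT statement above suggests. The sample size, population parameters, and number of replications are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100       # sample size (arbitrary choice for illustration)
mu = 5.0      # population mean (arbitrary)
sigma = 2.0   # population standard deviation (arbitrary)
reps = 20000  # number of simulated samples

# Draw many samples of size n and compute each sample's mean.
samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
means = samples.mean(axis=1)

# Empirical standard deviation of the sample mean,
# compared with the CLT value sigma / sqrt(n).
print(means.std())          # close to 0.2
print(sigma / np.sqrt(n))   # exactly 0.2
```

The two printed numbers agree, which is why I expected a division by sqrt(n) in the standard error of b0 as well.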

Can you please let me know where I am wrong?