Have you wondered ‘Where do the covariances of the predictor variables go in regression?’ or ‘How does regression analysis produce the unique effects of each predictor on the outcome variable?’ Awesome. You’re a normal psychologist asking a really important question.*

The simple answer is that the covariances among predictors are accounted for during parameter estimation (which is just a fancy way of saying ‘It just does’, and you can get away with that answer as a psychological researcher). Read on to learn more about this.

Consider a regression with two predictors ($X_1$ and $X_2$) and an outcome variable ($Y$), such that

Equation 1: $Y = b_0 + b_1X_1 + b_2X_2 + \epsilon$

Within Equation 1, $b_0$ is the expected value of $Y$ when $X_1$ and $X_2$ equal zero, $b_1$ represents the unique effect of $X_1$ on $Y$, and $b_2$ indicates the unique effect of $X_2$ on $Y$.
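To make this concrete, here is a minimal numerical sketch. The data are simulated and the "true" coefficient values (1.0, 0.3, 0.6) are arbitrary choices for illustration, not from any real study; I'm using NumPy's least-squares solver as one way to fit the model.

```python
import numpy as np

# Simulated data purely for illustration; the "true" coefficients
# (1.0, 0.3, 0.6) are arbitrary choices, not from any real study.
rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)      # predictors covary by construction
y = 1.0 + 0.3 * x1 + 0.6 * x2 + rng.normal(size=n)

# Fit Equation 1 (Y = b0 + b1*X1 + b2*X2 + e) by ordinary least squares
X = np.column_stack([np.ones(n), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(b0, b1, b2)  # estimates land near 1.0, 0.3, 0.6
```

Note that even though $X_1$ and $X_2$ are correlated by construction, the fitted coefficients recover each predictor's unique effect.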

Therefore, when fitting the model in Equation 1, how does regression analysis account for the covariance between $X_1$ and $X_2$?

I’m going to simplify this blog post by only considering *z*-score versions of each of the variables. Let

$Y = \frac{Y - \bar{Y}}{\sigma_Y}, \qquad X_1 = \frac{X_1 - \bar{X}_1}{\sigma_{X_1}}, \qquad X_2 = \frac{X_2 - \bar{X}_2}{\sigma_{X_2}}$

(reusing the same symbols for the standardized variables) and substitute into Equation 1, then we have

Equation 2: $Y = b_1X_1 + b_2X_2 + \epsilon$

Equation 2 does not include an intercept; when the predictors and outcome variable are *z*-scores, the intercept equals 0.**
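A quick numerical check of that claim, again with simulated data (the variables and true coefficients are made up for illustration): after *z*-scoring everything, the fitted intercept is zero up to floating-point rounding.

```python
import numpy as np

# Simulated data for illustration only
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.4 * x1 + rng.normal(size=n)
y = 2.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

def zscore(v):
    return (v - v.mean()) / v.std()

# Fit Equation 2 on the z-scores, still allowing an intercept column
Z = np.column_stack([np.ones(n), zscore(x1), zscore(x2)])
b0, b1, b2 = np.linalg.lstsq(Z, zscore(y), rcond=None)[0]
print(abs(b0) < 1e-8)  # True: the fitted intercept is 0 up to rounding
```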

Regression analysis selects the estimates for $b_1$ and $b_2$ by minimizing the residual variance (i.e., $\sigma^2_{\epsilon}$). In particular, regression analysis chooses the $b_1$ and $b_2$ that minimize

Equation 3: $\sigma^2_{\epsilon} = \frac{1}{N}\sum_{i=1}^{N}\epsilon_i^2$

We can rewrite the residual variance of Equation 3 in terms of the regression equation from Equation 2 as

Equation 4: $\sigma^2_{\epsilon} = \frac{1}{N}\sum_{i=1}^{N}\left(Y_i - b_1X_{1i} - b_2X_{2i}\right)^2$

Let’s understand this equation a bit further. First, let’s square the summand on the right-hand side of Equation 4.

Equation 5: $\sigma^2_{\epsilon} = \frac{1}{N}\sum_{i=1}^{N}\left(Y_i^2 + b_1^2X_{1i}^2 + b_2^2X_{2i}^2 - 2b_1Y_iX_{1i} - 2b_2Y_iX_{2i} + 2b_1b_2X_{1i}X_{2i}\right)$

Distributing the summation operator in Equation 5 produces

Equation 6: $\sigma^2_{\epsilon} = \frac{1}{N}\sum_{i=1}^{N}Y_i^2 + \frac{1}{N}\sum_{i=1}^{N}b_1^2X_{1i}^2 + \frac{1}{N}\sum_{i=1}^{N}b_2^2X_{2i}^2 - \frac{1}{N}\sum_{i=1}^{N}2b_1Y_iX_{1i} - \frac{1}{N}\sum_{i=1}^{N}2b_2Y_iX_{2i} + \frac{1}{N}\sum_{i=1}^{N}2b_1b_2X_{1i}X_{2i}$

We can extract each of the regression coefficients from the summands on the right-hand side of Equation 6 to obtain

Equation 7: $\sigma^2_{\epsilon} = \frac{1}{N}\sum_{i=1}^{N}Y_i^2 + b_1^2\left(\frac{1}{N}\sum_{i=1}^{N}X_{1i}^2\right) + b_2^2\left(\frac{1}{N}\sum_{i=1}^{N}X_{2i}^2\right) - 2b_1\left(\frac{1}{N}\sum_{i=1}^{N}Y_iX_{1i}\right) - 2b_2\left(\frac{1}{N}\sum_{i=1}^{N}Y_iX_{2i}\right) + 2b_1b_2\left(\frac{1}{N}\sum_{i=1}^{N}X_{1i}X_{2i}\right)$

Many of the terms within Equation 7 can be simplified. Because all of the variables are *z*-scores (mean 0), we know that

$\frac{1}{N}\sum_{i=1}^{N}Y_i^2 = \sigma^2_Y, \qquad \frac{1}{N}\sum_{i=1}^{N}X_{1i}^2 = \sigma^2_{X_1}, \qquad \frac{1}{N}\sum_{i=1}^{N}X_{2i}^2 = \sigma^2_{X_2},$

$\frac{1}{N}\sum_{i=1}^{N}Y_iX_{1i} = \sigma_{Y,X_1}, \qquad \frac{1}{N}\sum_{i=1}^{N}Y_iX_{2i} = \sigma_{Y,X_2}, \qquad \frac{1}{N}\sum_{i=1}^{N}X_{1i}X_{2i} = \sigma_{X_1,X_2}.$

Therefore, we can simplify Equation 7 to

Equation 8: $\sigma^2_{\epsilon} = \sigma^2_Y + b_1^2\sigma^2_{X_1} + b_2^2\sigma^2_{X_2} - 2b_1\sigma_{Y,X_1} - 2b_2\sigma_{Y,X_2} + 2b_1b_2\sigma_{X_1,X_2}$
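We can verify this final variance decomposition numerically. In the sketch below (simulated, *z*-scored data; the candidate $b_1$ and $b_2$ are arbitrary, not the least-squares estimates), the residual variance computed directly from the residuals agrees with the variance/covariance decomposition, for any choice of coefficients:

```python
import numpy as np

# Simulated, z-scored data; b1 and b2 below are arbitrary candidates,
# not the least-squares estimates -- the identity holds either way.
rng = np.random.default_rng(2)
n = 800
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

z = lambda v: (v - v.mean()) / v.std()   # population (ddof=0) z-scores
y, x1, x2 = z(y), z(x1), z(x2)
cov = lambda a, b: np.mean(a * b)        # covariance of mean-zero variables

b1, b2 = 0.2, 0.5

# Residual variance computed directly from the residuals (Equation 4)
lhs = np.mean((y - b1 * x1 - b2 * x2) ** 2)

# Residual variance from the variance/covariance decomposition
rhs = (cov(y, y) + b1**2 * cov(x1, x1) + b2**2 * cov(x2, x2)
       - 2 * b1 * cov(y, x1) - 2 * b2 * cov(y, x2)
       + 2 * b1 * b2 * cov(x1, x2))

print(np.isclose(lhs, rhs))  # True
```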

Thus, by squaring all of the terms within the regression model (i.e., Equations 4–5), the covariance among the predictor variables (i.e., $\sigma_{X_1,X_2}$) is directly accounted for.***
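One way to see exactly where that covariance gets used: setting the derivatives of the decomposed residual variance with respect to $b_1$ and $b_2$ to zero yields the normal equations, and $\sigma_{X_1,X_2}$ appears explicitly in the matrix that must be inverted to solve them. A small sketch with simulated data (nothing here comes from a real dataset):

```python
import numpy as np

# Simulated data for illustration; nothing here comes from a real dataset.
rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
y = 0.5 * x1 + 0.2 * x2 + rng.normal(size=n)

z = lambda v: (v - v.mean()) / v.std()
y, x1, x2 = z(y), z(x1), z(x2)
cov = lambda a, b: np.mean(a * b)

# Normal equations: the predictor covariance sigma_{X1,X2} sits in the
# off-diagonals of the matrix being solved, which is exactly where the
# covariances among predictors enter during estimation.
S = np.array([[cov(x1, x1), cov(x1, x2)],
              [cov(x2, x1), cov(x2, x2)]])
s = np.array([cov(y, x1), cov(y, x2)])
b = np.linalg.solve(S, s)

# Matches ordinary least squares on the z-scored variables
b_ols = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)[0]
print(np.allclose(b, b_ols))  # True
```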

Note that this is all way, way clearer within an SEM framework****.

*Ok, it was just Simine.

**Mean-centering. No interactions. Mind. Blown.

***So far I’ve just been answering the fake question.

****I smell another blog post….