The matrix $\mathbf M = \mathbf I-\mathbf X(\mathbf X'\mathbf X)^{-1}\mathbf X'$ is the "annihilator" or "residual maker" matrix associated with the matrix $\mathbf X$. It is called the "annihilator" because $\mathbf M\mathbf X =\mathbf 0$ (for its own $\mathbf X$ matrix, of course). It is called the "residual maker" because $\mathbf M \mathbf y =\mathbf {\hat e}$, the vector of OLS residuals in the regression $\mathbf y = \mathbf X \beta + \mathbf e$.
It is symmetric and idempotent, and it is used in the proof of the Gauss–Markov theorem.
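As a minimal numerical sketch (not part of the original argument; the variable names and the random data are purely illustrative), one can build $\mathbf M$ with NumPy and check the properties just listed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # design matrix with intercept
y = rng.normal(size=n)

# Residual maker / annihilator associated with X
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(M @ X, 0))      # "annihilator": M X = 0
print(np.allclose(M, M.T))        # symmetric
print(np.allclose(M @ M, M))      # idempotent

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
print(np.allclose(M @ y, resid))  # "residual maker": M y = OLS residuals
```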
It is also used in the Frisch–Waugh–Lovell theorem, from which one obtains the "partitioned regression" result: in the model (in matrix form)
$$\mathbf y = \mathbf X_1\beta_1 + \mathbf X_2\beta_2 + \mathbf u$$
we have that
$$\hat \beta_1 = (\mathbf X_1'\mathbf M_2\mathbf X_1)^{-1}(\mathbf X_1'\mathbf M_2)\mathbf y $$
Since $\mathbf M_2$ is idempotent we can re-write the above as
$$\hat \beta_1 = (\mathbf X_1'\mathbf M_2\mathbf M_2\mathbf X_1)^{-1}(\mathbf X_1'\mathbf M_2\mathbf M_2)\mathbf y$$
and since $\mathbf M_2$ is also symmetric we have
$$\hat \beta_1 = \big([\mathbf M_2\mathbf X_1]'[\mathbf M_2\mathbf X_1]\big)^{-1}[\mathbf M_2\mathbf X_1]'[\mathbf M_2\mathbf y]$$
But this is the least-squares estimator from the model
$$[\mathbf M_2\mathbf y] = [\mathbf M_2\mathbf X_1]\beta_1 + \mathbf M_2\mathbf u$$
and $\mathbf M_2\mathbf y$ is the vector of residuals from regressing $\mathbf y$ on $\mathbf X_2$ only.
In other words:
1) If we regress $\mathbf y$ on $\mathbf X_2$ only, and then regress the residuals from this estimation on $\mathbf M_2\mathbf X_1$ only, the $\hat \beta_1$ estimates we obtain are mathematically identical to the estimates we obtain by regressing $\mathbf y$ on both $\mathbf X_1$ and $\mathbf X_2$ together, as a usual multiple regression (a numerical check is sketched below).
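Here is a quick numerical check of this equivalence, assuming nothing beyond the formulas above (the data-generating coefficients below are arbitrary and chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X1 = rng.normal(size=(n, 2))
X2 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X1 @ np.array([1.5, -0.5]) + X2 @ np.array([0.3, 2.0, -1.0]) + rng.normal(size=n)

# Full regression of y on [X1, X2]; keep the coefficients on X1
X = np.hstack([X1, X2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0][:2]

# FWL two-step: residualize with respect to X2, then regress M2 y on M2 X1
M2 = np.eye(n) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
beta_fwl = np.linalg.lstsq(M2 @ X1, M2 @ y, rcond=None)[0]

print(np.allclose(beta_full, beta_fwl))  # True: the two estimates coincide
```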
Now, assume that $\mathbf X_1$ is not a matrix but just a single regressor, say $\mathbf x_1$. Then $\mathbf M_2 \mathbf x_1$ is the vector of residuals from regressing $\mathbf x_1$ on the regressor matrix $\mathbf X_2$. And this provides the intuition here: $\hat \beta_1$ gives us the effect that "the part of $\mathbf x_1$ that is unexplained by $\mathbf X_2$" has on "the part of $\mathbf y$ that is left unexplained by $\mathbf X_2$".
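The same intuition can be sketched in code for the single-regressor case (again with arbitrary illustrative data): the simple-regression slope of the residualized $\mathbf y$ on the residualized $\mathbf x_1$ reproduces the multiple-regression coefficient on $\mathbf x_1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
X2 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = 2.0 * x1 + X2 @ np.array([1.0, -0.5, 0.7]) + rng.normal(size=n)

M2 = np.eye(n) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
x1_tilde = M2 @ x1  # part of x1 unexplained by X2
y_tilde = M2 @ y    # part of y unexplained by X2

# Slope of the no-intercept simple regression of y_tilde on x1_tilde
slope = (x1_tilde @ y_tilde) / (x1_tilde @ x1_tilde)

# Coefficient on x1 from the full multiple regression
beta_full = np.linalg.lstsq(np.column_stack([x1, X2]), y, rcond=None)[0][0]

print(np.allclose(slope, beta_full))  # True
```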
This is an emblematic part of classic Least-Squares Algebra.