Here is an answer that computes the coefficients using the inverse of the covariance matrix, also known as the precision matrix (and closely related to the anti-image covariance matrix).
In simple linear regression: $Y=\beta_0+\beta_1X$
you can write $\beta_1=\frac{cov(x,y)}{var(x)}$, and then you easily obtain $\beta_0$ from $\overline{y}=\beta_0+\beta_1\overline{x}$.
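As a quick sanity check of these two formulas, here is a small sketch with simulated data (the seed, the true coefficients and the variable names are made up; `np.polyfit` is only used as a reference fit):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 + 2.0 * x + rng.normal(size=200)          # true beta_0 = 1.5, beta_1 = 2

beta_1 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # cov(x, y) / var(x)
beta_0 = y.mean() - beta_1 * x.mean()             # from ybar = beta_0 + beta_1 * xbar

print(beta_0, beta_1)                             # close to 1.5 and 2.0
print(np.polyfit(x, y, 1))                        # same slope and intercept from a reference fit
```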
Now the problem when you have more than one predictor variable, as e.g. in your example $Y=\beta_0 +\beta_1 X_1+\beta_2 X_2$, is that there can also be covariance between $X_1$ and $X_2$. So it is no longer possible to simply set $\beta_1=\frac{cov(y,x_1)}{var(x_1)}$ and $\beta_2=\frac{cov(y,x_2)}{var(x_2)}$, due to the problem of collinearity.
So what one would like to do is remove the linear influence of $X_2$ on $X_1$ and the other way around. It turns out that we can calculate $\beta_1=\frac{cov(y,r_1)}{var(r_1)}$, where $r_1$ is the vector of residuals of the linear regression of $X_1$ on $X_2$, and analogously $\beta_2=\frac{cov(y,r_2)}{var(r_2)}$, where $r_2$ is the vector of residuals of the linear regression of $X_2$ on $X_1$. As I said, you can think of this as removing the linear part of $X_1$ that is due to $X_2$, and the other way around.
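The following sketch illustrates both the problem and the fix with simulated, correlated predictors (the true coefficients and seed are made up): the naive per-predictor formula misses the true coefficient, while the residual-based formula recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)                  # X1 and X2 are correlated
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(size=n)  # true beta_1 = 2, beta_2 = -3

# naive per-predictor formula: off, because cov(X1, X2) != 0
print(np.cov(y, x1)[0, 1] / np.var(x1, ddof=1))

# residualize X1 on X2 (and vice versa), then apply the simple-regression formula
r1 = x1 - np.polyval(np.polyfit(x2, x1, 1), x2)     # residuals of X1 regressed on X2
r2 = x2 - np.polyval(np.polyfit(x1, x2, 1), x1)     # residuals of X2 regressed on X1
beta_1 = np.cov(y, r1)[0, 1] / np.var(r1, ddof=1)
beta_2 = np.cov(y, r2)[0, 1] / np.var(r2, ddof=1)
print(beta_1, beta_2)                               # close to 2 and -3

# reference: ordinary multiple regression via least squares
A = np.column_stack([np.ones(n), x1, x2])
print(np.linalg.lstsq(A, y, rcond=None)[0])         # intercept, beta_1, beta_2
```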
I am now going to explain how this is connected with the inverse of the covariance matrix. I will explain it in the three-dimensional case to keep the notation simple, but it works equally well for larger $n$. (The notation $X_1,X_2,X_3$ used from here on has nothing to do with the notation of your example.)
Assume that you have a matrix $(X_1,X_2,X_3)$ with columns $X_i \in \mathbb{R}^k$. Denote by $C$ the covariance matrix of these three vectors and by $\tilde{c}_{i,j}$ the elements of the inverse of $C$.
Now it turns out that $\tilde{c}_{i,i}= \frac{1}{var(\tilde{x}_i)}$, where $\tilde{x}_i$ is the residual vector of the linear regression $X_i=\beta_0 X_j+\beta_1 X_l$ with $\{i,j,l\}=\{1,2,3\}$, i.e. the diagonal elements are the reciprocals of the so-called partial variances.
Furthermore, $\tilde{c}_{i,j}=\frac{cor(\tilde{x}_i,\tilde{x}_j)}{\sqrt{var(\tilde{x}_i)}\sqrt{var(\tilde{x}_j)}}=\frac{cov(\tilde{x}_i,\tilde{x}_j)}{var(\tilde{x}_i)\,var(\tilde{x}_j)}$. (Note that with this definition of $\tilde{x}_i$, where each column is regressed on all the other columns, $cor(\tilde{x}_i,\tilde{x}_j)$ equals the negative of the partial correlation of $X_i$ and $X_j$; that is where the minus sign usually quoted for this formula comes from.)
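Both identities can be checked numerically. Here is a small sketch with simulated data (the covariance matrix used for the simulation and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# three correlated columns X1, X2, X3
X = rng.multivariate_normal([0, 0, 0],
                            [[1.0, 0.5, 0.2],
                             [0.5, 1.0, 0.3],
                             [0.2, 0.3, 1.0]], size=n)

C = np.cov(X, rowvar=False)      # covariance matrix C of the three columns
C_inv = np.linalg.inv(C)         # its inverse, with elements c_tilde_{i,j}

def residual(i):
    """Residuals of column i regressed (with intercept) on all other columns."""
    A = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
    coef, *_ = np.linalg.lstsq(A, X[:, i], rcond=None)
    return X[:, i] - A @ coef

res = [residual(i) for i in range(3)]

# diagonal: c_tilde_{i,i} = 1 / var(x_tilde_i), the reciprocal partial variances
print(np.diag(C_inv))
print([1 / np.var(r, ddof=1) for r in res])

# off-diagonal: c_tilde_{i,j} = cor(x_tilde_i, x_tilde_j) / (sd(x_tilde_i) * sd(x_tilde_j))
i, j = 0, 1
lhs = C_inv[i, j]
rhs = np.corrcoef(res[i], res[j])[0, 1] / (np.std(res[i], ddof=1) * np.std(res[j], ddof=1))
print(lhs, rhs)
```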
Note that the role of $\tilde{x}_i$ differs from that of $r_i$ above: $\tilde{x}_i$ is the residual when one uses ALL other columns of the matrix in the regression, while above for $r_i$ we only used the columns of the other PREDICTORS.
However, due to some nice mathematical equalities it still works out, as it holds that $$\frac{cov(y,r_1)}{var(r_1)}=\frac{cor(y,r_1)\sqrt{var(y)}\sqrt{var(r_1)}}{var(r_1)}= \frac{cor(y,r_1)\sqrt{var(y)}}{\sqrt{var(r_1)}}=\frac{-cov(\tilde{y},\tilde{x}_1)}{var(\tilde{x}_1)}= -\frac{w_{1,2}}{w_{1,1}},$$ where we denote by $\tilde{y}$ the residuals of the regression $Y=\beta_0 X_1 + \beta_1 X_2$, by $\tilde{x}_1$ the residuals of the regression $X_1 = \beta_0 Y + \beta_1 X_2$, and by $w_{i,j}$ the elements of the inverse of the covariance matrix of $(Y,X_1,X_2)$ from your example. This means, for example, that $w_{1,1}=\frac{1}{var(\tilde{y})}$ and $w_{1,2}=\frac{cor(\tilde{y},\tilde{x}_1)}{\sqrt{var(\tilde{y})}\sqrt{var(\tilde{x}_1)}}$. So the coefficients can be read off as $\beta_1=-\frac{w_{1,2}}{w_{1,1}}$ and, analogously, $\beta_2=-\frac{w_{1,3}}{w_{1,1}}$; the intercept $\beta_0$ then follows from the means, just as in the simple case.
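Putting it all together, the coefficients of your example can be read directly off the inverse of the covariance matrix of $(Y,X_1,X_2)$. A small numerical sketch (simulated data with made-up true coefficients; note that numpy indexes from 0, so `W[0, 1]` corresponds to $w_{1,2}$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)                  # correlated predictors
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(size=n)  # true beta_1 = 2, beta_2 = -3

# inverse of the covariance matrix of (Y, X1, X2); W[0, 0] is w_{1,1}, W[0, 1] is w_{1,2}, ...
W = np.linalg.inv(np.cov(np.column_stack([y, x1, x2]), rowvar=False))

beta_1 = -W[0, 1] / W[0, 0]
beta_2 = -W[0, 2] / W[0, 0]
print(beta_1, beta_2)                               # matches the least-squares fit below

# reference: ordinary least squares on (1, X1, X2)
A = np.column_stack([np.ones(n), x1, x2])
print(np.linalg.lstsq(A, y, rcond=None)[0])         # intercept, beta_1, beta_2
```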