The estimated regression coefficients are functions of the data (both the response and the explanatory variables). Since the regression model specifies the conditional distribution of the response given the explanatory variables, it also implies a particular conditional distribution for the estimated regression coefficients given the explanatory variables. That is, if the regression model fully specifies the distribution of $\mathbf{y} | \mathbf{x}$, then it implies a distribution for $\hat{\boldsymbol{\beta}}(\mathbf{y}, \mathbf{x}) | \mathbf{x}$.
When using Gaussian linear regression (the standard model form), we have $\mathbf{y} | \mathbf{x} \sim \text{N}(\mathbf{x} \boldsymbol{\beta}, \sigma^2 \mathbf{I})$. If we use the OLS estimator for the regression coefficients, the estimator is a linear function of the error vector:
$$\hat{\boldsymbol{\beta}}(\mathbf{y}, \mathbf{x}) = (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \mathbf{y} = \boldsymbol{\beta} + (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \boldsymbol{\varepsilon}, \quad \quad \boldsymbol{\varepsilon} \equiv \mathbf{y} - \mathbf{x} \boldsymbol{\beta} \sim \text{N}(\mathbf{0}, \sigma^2 \mathbf{I}).$$
Since a linear transformation of a multivariate normal vector is itself multivariate normal, the resulting vector of estimated regression coefficients is also normally distributed, with distribution:
$$\hat{\boldsymbol{\beta}}(\mathbf{y}, \mathbf{x}) \sim \text{N} (\boldsymbol{\beta}, (\mathbf{x}^\text{T} \mathbf{x})^{-1} \sigma^2 ).$$
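As a sanity check on this result, here is a minimal simulation sketch in Python (the design matrix, true coefficients, and noise level below are illustrative assumptions, not values from the question): it repeatedly draws $\mathbf{y} | \mathbf{x}$ from the Gaussian model with a fixed design and compares the empirical mean and covariance of the OLS estimates to $\boldsymbol{\beta}$ and $(\mathbf{x}^\text{T} \mathbf{x})^{-1} \sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_sims = 100, 5000
beta = np.array([1.0, 2.0])   # illustrative true coefficients (assumed)
sigma = 0.5                   # illustrative error standard deviation (assumed)

# Fixed design matrix: an intercept column plus one uniform covariate
x = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])

# Theoretical covariance of the OLS estimator: (x'x)^{-1} sigma^2
theoretical_cov = sigma**2 * np.linalg.inv(x.T @ x)

# Repeatedly draw y | x ~ N(x beta, sigma^2 I) and compute the OLS estimate
estimates = np.empty((n_sims, len(beta)))
for i in range(n_sims):
    y = x @ beta + rng.normal(0.0, sigma, n)
    estimates[i] = np.linalg.lstsq(x, y, rcond=None)[0]

print("mean of estimates:", estimates.mean(axis=0))        # close to beta
print("empirical covariance:\n", np.cov(estimates, rowvar=False))
print("theoretical covariance:\n", theoretical_cov)        # should match
```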
Now, it also turns out that the normality of the vector of estimated regression coefficients is quite robust to the model assumptions. Even if the error terms in the model are not normally distributed, with a sufficiently large amount of data a variant of the central limit theorem (one that handles the independent but non-identically-distributed terms arising in OLS, such as the Lindeberg-Feller version) ensures that the estimated regression coefficients are approximately normally distributed. (Asymptotic results of this kind, and indeed consistency of the estimator, require conditions on the explanatory variables; see this related answer.)
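To see this robustness in action, the same kind of simulation sketch can be run with strongly skewed errors (again, all settings below are illustrative assumptions). Using centred exponential noise, which has mean zero and variance one, the sampling distribution of the slope estimate still lines up closely with the normal approximation $\text{N}\big(\beta_1, [(\mathbf{x}^\text{T} \mathbf{x})^{-1}]_{22}\big)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n, n_sims = 200, 5000
beta = np.array([1.0, 2.0])   # illustrative true coefficients (assumed)
x = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])

# Skewed, clearly non-normal errors: centred exponential (mean 0, variance 1)
slopes = np.empty(n_sims)
for i in range(n_sims):
    eps = rng.exponential(1.0, n) - 1.0
    y = x @ beta + eps
    slopes[i] = np.linalg.lstsq(x, y, rcond=None)[0][1]

# Normal approximation for the slope: since sigma^2 = 1 here, its standard
# error is the square root of the second diagonal element of (x'x)^{-1}
se = np.sqrt(np.linalg.inv(x.T @ x)[1, 1])
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"q={q:.2f}  empirical={np.quantile(slopes, q):+.4f}  "
          f"normal={stats.norm.ppf(q, loc=beta[1], scale=se):+.4f}")
```

Even though the individual errors are heavily skewed, the empirical quantiles of the slope estimates should be close to the normal quantiles at this sample size.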