Well, there are plenty of reasons to choose a different error distribution. But I suspect you aren't aware of why we have distributions for variables in the first place. If this is obvious to you, then my answer is probably useless to you, sorry.
Why distributions are important
See, having distributions allows us to treat a model in a probabilistic framework, meaning we can quantify the uncertainty about our model. When in Stats 101 we learn that the sampling distribution of the sample mean is (asymptotically) $\bar{X} \,\dot{\sim}\, \mathcal{N}(\mu, \sigma^2/n)$, we can, within that probabilistic framework, say a lot about the estimate: test hypotheses, construct confidence intervals, and so on.
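For example, that sampling distribution is exactly what justifies the usual approximate $(1-\alpha)$ confidence interval for $\mu$:

$$\bar{X} \pm z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}},$$

where in practice $\sigma$ is replaced by the sample standard deviation $s$ (giving the familiar $t$-based interval in small samples). Without a distribution for $\bar{X}$, there is no such interval to write down.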
Probability distributions in linear and generalized linear models
In a linear model framework we can do essentially the same, provided we know the distribution of the error term. Why? Because the estimators are linear combinations of random variables (see this answer). The point is that when this probabilistic structure is present in the model, we can again do all sorts of things. Most notably, besides hypothesis testing and constructing confidence intervals, we can build predictions with quantified uncertainty, perform model selection, run goodness-of-fit tests, and much more. A quick sketch of this is below.
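To make that concrete, here is a minimal Python sketch using statsmodels on simulated data (all names and values are illustrative, not a definitive recipe): with normal errors assumed, the fitted linear model immediately gives coefficient tests, confidence intervals, and prediction intervals.

```python
# A minimal sketch: OLS under normal errors yields coefficient tests,
# confidence intervals, and prediction intervals.
# Data are simulated; variable names and values are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)   # linear model with Gaussian errors

X = sm.add_constant(x)                    # design matrix with an intercept
fit = sm.OLS(y, X).fit()

print(fit.summary())                      # t-tests and CIs for each coefficient

new_x = sm.add_constant(np.array([2.5, 7.5]))
pred = fit.get_prediction(new_x)
print(pred.summary_frame(alpha=0.05))     # mean CI and prediction interval per new point
```

All of those intervals and tests come directly from the assumed error distribution; drop that assumption and the machinery behind the summary table disappears.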
Now, why do we need GLMs specifically? First, the probabilistic framework of the linear model can't handle certain types of data, such as counts or binary outcomes. These types of data are intrinsically different from regular continuous data: it is possible to have a height of 1.83 meters, but it is senseless to have 4.5 electrical lights not working.
Therefore the motivation for GLMs starts with handling different types of data, primarily through the use of link functions and/or by cleverly recasting the intended model into a known linear "framework". These needs and ideas are directly connected to how the errors are modeled by the "framework" being used.
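As a rough illustration (again a sketch on simulated data with statsmodels; names and numbers are placeholders), a Poisson GLM with a log link handles count data while keeping the same probabilistic machinery: Wald tests, confidence intervals, and deviance-based goodness of fit.

```python
# A minimal sketch: a Poisson GLM with a log link for count data.
# Data are simulated; names and values are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, n)
mu = np.exp(0.3 + 0.8 * x)                # log link: log(mu) = 0.3 + 0.8 * x
y = rng.poisson(mu)                       # integer counts (no 4.5 broken lights here)

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # log is the Poisson default link
print(fit.summary())                      # Wald tests, CIs, deviance for goodness of fit
```

The family choice (here Poisson) is exactly the statement about how the randomness in the response is modeled, which is why the error distribution is not an afterthought but the starting point of the whole "framework".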