- ...if we failed to reject $H_0$, wouldn't this mean that we would "probably erroneously" have to assume $\mu = 188$...
It means that your data, which suggested a mean smaller than 188 cm, are not statistically significant evidence against that value. If 188 cm were the true mean, then a discrepancy of the magnitude observed in the sample could occur with reasonable probability.
A sample of size 3 is not a good basis for declaring 188 cm false. That is different from saying that 188 cm is true.
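This can be made concrete with a one-sample z-test (known variance). The three heights below are made up for illustration, since the actual data from the question are not given; the hypothesized mean 188 cm and variance 50 follow the question's setup:

```python
import math
from scipy import stats

# Hypothetical sample of 3 heights in cm (the question's actual data are not given)
sample = [180.0, 184.0, 186.0]
mu0 = 188.0   # hypothesized population mean
var = 50.0    # assumed known population variance, as in the question
n = len(sample)

xbar = sum(sample) / n
z = (xbar - mu0) / math.sqrt(var / n)   # z-statistic with known variance
p = 2 * stats.norm.sf(abs(z))           # two-sided p-value

print(f"sample mean = {xbar:.1f}, z = {z:.3f}, p = {p:.3f}")
```

The sample mean is well below 188 cm, yet the p-value is large: with only 3 observations and this much variance, such a discrepancy occurs with reasonable probability when 188 cm is true. Failing to reject does not establish that 188 cm is true.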
- is it possible to perform a hypothesis test with no prior knowledge?
There will always be assumptions about the model. For instance, in your example there is the assumption that height has a variance of 50 and can be approximated by a normal distribution.
However, hypothesis testing requires no prior in the sense of Bayesian analysis, where a hypothesis or a range of hypotheses is assigned a probability (density) of being true.
Informally there may be ideas about prior probabilities for hypotheses. For instance, the choice of a significance level (the cutoff below which a p-value must fall for us to reject) is based on practical considerations. If in practice too many hypotheses are falsely rejected (afterwards we find out there are too many false positives), people might decide to use a different (reduced) significance level.
- didn't they make a mistake and flip the hypotheses around?
There is a way to more or less accept the hypothesis of 188 cm. More precisely, you can reject that the value is far away from 188 cm, and accept that it lies within some range around 188 cm.
This relates to tests for equivalence; see, for instance, the explanation in an answer to the question: Why are standard frequentist hypotheses so uninteresting?
An example is the two one-sided t-tests (TOST) procedure for equivalence testing. It can be illustrated with the image below and can be viewed as testing three hypotheses instead of two for the absolute difference:
$$\begin{array}{lcl}H_0 &:& |\text{difference}| = 0\\
H_\epsilon &:& 0 < |\text{difference}| \leq \epsilon\\
H_\text{effect} &:& \epsilon < |\text{difference}|\end{array}$$
Below is a sketch of the position of the confidence interval within these 3 regions (unlike the typical sketch of TOST, there are actually 5 situations instead of 4).
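The TOST decision can also be computed directly: equivalence is concluded when both one-sided tests reject. A minimal one-sample sketch, where the data, the margin $\epsilon = 3$ cm, and the significance level are all made-up illustration values:

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, mu0, eps, alpha=0.05):
    """Two one-sided t-tests (TOST): the null is non-equivalence,
    |mean - mu0| >= eps. Equivalence is concluded when BOTH
    one-sided tests reject, i.e. when the larger p-value < alpha."""
    x = np.asarray(x)
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_lower = (np.mean(x) - (mu0 - eps)) / se  # tests mean > mu0 - eps
    t_upper = (np.mean(x) - (mu0 + eps)) / se  # tests mean < mu0 + eps
    p = max(stats.t.sf(t_lower, df=n - 1),
            stats.t.cdf(t_upper, df=n - 1))
    return p, p < alpha

# Made-up sample of 10 heights clustered near 188 cm
x = [187.2, 188.5, 189.1, 187.8, 188.4,
     187.9, 188.8, 187.5, 188.2, 188.6]
p, equivalent = tost_one_sample(x, mu0=188.0, eps=3.0)
print(f"TOST p = {p:.2e}, equivalent within 3 cm: {equivalent}")
```

With the whole confidence interval falling inside $(\mu_0 - \epsilon, \mu_0 + \epsilon)$, both one-sided tests reject and we "accept" that the mean lies within 3 cm of 188 cm.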

The point of observations and experiments is to find a data-driven answer to questions by excluding/eliminating what is (probably) not the answer (Popper's falsification).
Null hypothesis testing does this in a somewhat crude manner and does not differentiate between the situations B, C, and E. In many situations, however, this is not much of a problem. Often the goal is not to test for tiny effects with $H_0: |\mu-\mu_0|<\epsilon$; the effect size is expected to be sufficiently large, well above some $\epsilon$. In many practical cases testing $|\text{difference}| > \epsilon$ is nearly the same as testing $|\text{difference}| > 0$, and the null hypothesis test is a reasonable simplification. It is in the modern era of very large samples that effect sizes of order $\epsilon$ start to play a role in results.
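The large-sample point can be illustrated with an idealized calculation (all numbers here are made up): a tiny true difference becomes "significant" against $H_0: \text{difference} = 0$ once $n$ is huge, while a TOST against a margin $\epsilon$ correctly reports the effect as negligible:

```python
import math
from scipy import stats

n = 1_000_000
obs_diff = 0.1               # tiny observed difference (cm), made up
sigma = math.sqrt(50.0)      # population standard deviation from the question
se = sigma / math.sqrt(n)

# Test against H0: difference = 0 -> tiny effect is flagged as significant
z_null = obs_diff / se
p_null = 2 * stats.norm.sf(abs(z_null))

# TOST against a margin eps = 1 cm -> effect shown to be within the margin
eps = 1.0
z_equiv = (obs_diff - eps) / se          # upper one-sided statistic
p_equiv = stats.norm.cdf(z_equiv)

print(f"p versus difference = 0 : {p_null:.2e}")
print(f"TOST p (|difference| <= {eps} cm): {p_equiv:.2e}")
```

Both p-values are essentially zero: the point-null test rejects even though the effect is practically irrelevant, whereas the equivalence test makes that irrelevance explicit.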