Curious to see if anyone knows of some better examples, but I wanted to pitch in with my experience on this in real-life scenarios.
I should start by saying that in my experience this is rarely an issue for experienced modellers using well written solvers. It is exceedingly rare that we'd need that level of accuracy in practice, i.e., that the only way to get the job done is to achieve that level of accuracy.
Fundamentally, the need for high levels of numerical accuracy arises in situations where we have numbers of significantly different orders of magnitude in our calculations, or significant accumulation of error. Sometimes that's ok and we can work with imprecisions, and sometimes it's not.
In general, we try to scale/formulate our models such that these large variations are not present, control the error using different algorithms, keep track of it using e.g. interval arithmetic, and so on. However, in certain cases this can't be easily worked around.
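To give a feel for the interval-arithmetic idea, here's a minimal toy sketch (the `Interval` class is hand-rolled purely for illustration, and the outward rounding via Python 3.9+'s `math.nextafter` is my own choice for the example; real interval libraries are far more careful and complete):

```python
import math

class Interval:
    """Toy interval type: tracks a guaranteed enclosure of a value.
    Rounding each bound outwards with math.nextafter keeps the enclosure valid."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def width(self):
        return self.hi - self.lo

# Adding a tiny number to a huge one: the enclosure's width tells us how much
# accuracy we can still honestly claim after the operation.
x = Interval(1.0e12) + Interval(1.0e-4)
print(x.lo, x.hi, x.width())  # width (~2 ulps of 1e12, i.e. ~2.4e-4) already exceeds the 1e-4 we added
```

The point is simply that the interval width gives an honest running measure of how much accuracy is left, which is what lets us flag a calculation as untrustworthy instead of silently believing it.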
One algorithmic case where we encounter this is when generating convex relaxations of non-convex functions. Even if the solution is at $0$, the coefficients of the constraints we generate can easily be in the order of $1.e50$ or higher. In such cases we often can't trust the result of solving the relaxation, and if we do, we can easily fathom (i.e., discard) the global solution by accident. The techniques we use to handle this could easily fill a few books, but we do handle it without actually achieving that level of accuracy.
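To make the coefficient blow-up a bit more concrete, here is a hedged sketch using the standard McCormick underestimators for a bilinear term $w = xy$ as a stand-in for whatever relaxation a real code would generate (the variable bounds below are invented purely to show the scaling):

```python
# McCormick convex underestimators for w = x*y over the box [xL, xU] x [yL, yU]:
#   w >= yL*x + xL*y - xL*yL
#   w >= yU*x + xU*y - xU*yU
# The bounds are made up for illustration; with box bounds around 1e25
# the constant terms land around 1e50.
xL, xU = -1.0e25, 1.0e25
yL, yU = -1.0e25, 1.0e25

# Each row is (coeff_x, coeff_y, constant) in  w >= coeff_x*x + coeff_y*y + constant
underestimators = [
    (yL, xL, -xL * yL),
    (yU, xU, -xU * yU),
]
for cx, cy, c in underestimators:
    print(f"w >= ({cx:.1e})*x + ({cy:.1e})*y + ({c:.1e})")
# With constants of order 1e50, a double-precision LP code cannot even see
# feasibility violations many orders of magnitude smaller than that, so the
# bound it reports for the relaxation cannot be taken at face value.
```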
One academic example of a physical case that I remember from back in the day is the minimisation of the Gibbs free energy of Lennard-Jones clusters, where the potential is given by the following formula:
$V_{LJ}(r) = 4\epsilon[(\frac{\sigma}{r})^{12}-(\frac{\sigma}{r})^6]$.
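To get a feel for the numbers involved, here's a quick sketch in reduced units ($\epsilon = \sigma = 1$, values picked purely for illustration):

```python
# Lennard-Jones pair potential in reduced units (epsilon = sigma = 1,
# chosen here purely for illustration).
def v_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

for r in (0.3, 0.5, 1.0, 1.1225, 2.5):
    print(f"r = {r:6.4f}   V = {v_lj(r): .6e}")
# Roughly: V(0.3) ~ 7.5e6, V(1.0) = 0, V(1.1225) ~ -1 near the well minimum,
# V(2.5) ~ -0.016: a small step in r on the repulsive wall swings the value
# over many orders of magnitude.
```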
Numerically, there's all sorts of nastiness in that formula: the twelfth power blows up violently at small separations, the two terms cancel exactly at $r = \sigma$, and a cluster objective sums many such pair terms of wildly different magnitude. Note however that, while academically interesting, the model itself is pretty crude to begin with, hence the garbage-in garbage-out rule applies. Even if we did solve this with perfect accuracy, it's a poor representation of the physical system, hence the solution would not be very useful for describing anything interesting at macro-scale.
In other words, in a modelling context we always keep in mind that high accuracy can only ever be meaningful if the model itself is highly accurate to begin with.
In practice, we tend to trust calculations whose quantities stay in ranges where floating point numbers are fairly well represented and which span no more than about $6-9$ orders of magnitude (e.g. $0-1.e6$ is a good range, $1.e302-1.e308$ is bad, since the absolute gaps between representable numbers up there are enormous), and we flag calculations which are potentially not trustworthy to be handled in other ways, if we need to, or avoid them entirely.
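For what it's worth, Python 3.9+'s `math.ulp` makes the difference between those two ranges easy to see, since it reports the spacing between adjacent doubles at a given magnitude:

```python
import math

# Absolute spacing between adjacent IEEE 754 doubles at various magnitudes.
for x in (1.0, 1.0e6, 1.0e302, 1.0e308):
    print(f"x = {x:.0e}   ulp(x) = {math.ulp(x):.3e}")
# Roughly:
#   x = 1e+00   ulp(x) = 2.220e-16
#   x = 1e+06   ulp(x) = 1.164e-10
#   x = 1e+302  ulp(x) = 1.903e+286
#   x = 1e+308  ulp(x) = 1.996e+292
```

In the first range any quantity of physical interest dwarfs the representation error; in the second the gaps between representable numbers are themselves astronomically large, so absolute errors are unavoidable no matter how careful the algorithm is.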