This is a very good question, as you realise that not rejecting normality in a normality test does not guarantee that the "true" distribution is "approximately Gaussian".
Unfortunately my answer will be disappointing and, to some extent, worrying.
If you wanted to show convincingly, backed up by theory, that anything is truly "approximately Gaussian", you would first need to define formally what that means. Such a definition would typically assume that the data come from some true distribution $P$, fix a distance measure $d$ between distributions, and then require that there exists a normal distribution $Q$ with suitable parameters such that $d(P,Q)$ is small enough.
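For concreteness, one way such a requirement could be written down (with the distance $d$ and the tolerance $\varepsilon$ still left to be chosen) is
$$\exists\,\mu\in\mathbb{R},\ \sigma^2>0:\quad d\big(P,\ \mathcal{N}(\mu,\sigma^2)\big)\le\varepsilon.$$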
Unfortunately, regardless of the choice of $d$, no such thing can be shown for real data, because for starters the existence of a true underlying distribution $P$ is itself an idealisation that cannot be verified in reality. Under the usual frequentist interpretation of probability, the "real existence" of any probability would require infinite repetition of the data-generating process (in fact not only infinite but also "random" in a certain sense, which is hard to define, as has been discussed in the literature on the foundations of probability for a long time), and this does not happen in reality. Any actual repetition is finite, and it cannot be guaranteed to be "random". So the very existence of a true probability distribution cannot be guaranteed, yet it would be a prerequisite for making sure that this distribution is "approximately" anything.
What you can do is make sure that $d(P_n,Q)$ is small enough, where $P_n$ is the empirical distribution of your observed data points. This is in fact what some normality tests do. For example, the Kolmogorov-Smirnov test rejects normality if $d(P_n,Q)$ is too large, where $d$ is the Kolmogorov distance.
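Here is a minimal sketch in Python (assuming NumPy and SciPy; the simulated data are just a placeholder for your own sample) of how this distance can be computed: the KS statistic is exactly $\sup_x |P_n(x)-Q(x)|$, the Kolmogorov distance between $P_n$ and the fitted normal $Q$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=200)  # placeholder data; use your own sample

# Fit Q by plugging in the sample mean and standard deviation.
# (Note: estimating the parameters from the same data invalidates the
# standard KS p-value; Lilliefors' correction would be needed for that.
# Here only the distance itself is of interest.)
q = stats.norm(loc=x.mean(), scale=x.std(ddof=1))

# KS statistic = sup_x |P_n(x) - Q(x)| = Kolmogorov distance d(P_n, Q)
result = stats.kstest(x, q.cdf)
print(f"Kolmogorov distance d(P_n, Q) = {result.statistic:.3f}")
```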
This is unfortunately weaker than what you want, because it does not imply that theory for a Gaussian random variable (i.i.d. draws from it) will approximately apply. See the answer of @Glen_b for an example. Also, you cannot secure independence (see https://doi.org/10.1007/s00362-023-01414-3 ), yet any such theory will fail if independence is critically violated (the very idea that $P_n$ represents $P$ well relies on i.i.d. sampling via the Glivenko-Cantelli theorem). "Everything depends on everything else" in the real world, said Thich Nhat Hanh, and I think he was right about this.
So unfortunately, despite their limitations, normality tests are about as close as you can come to what you want.
Ultimately we need to resort to Popper's idea of falsification: we cannot positively make sure that our theory holds; we can only put it to certain tests and see whether it is rejected.
As a side remark, it is also good to keep in mind that for much theory based on normality, only certain deviations from normality (such as gross outliers, strong skewness, or violations of independence) are problematic, whereas other deviations do not destroy the results, at least not with reasonably large samples, due to the Central Limit Theorem. Standard normality tests are not necessarily sensitive to the right issues, namely those that really cause trouble.
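As a rough simulation sketch of this point (the distributions, sample size, and number of replications are my own illustrative choices): the coverage of a nominal 95% t-interval for the mean holds up fairly well under mild skewness but suffers under strong skewness.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 50, 10000

def t_interval_coverage(draw, true_mean):
    """Share of simulated samples whose nominal 95% t-interval covers the true mean."""
    tcrit = stats.t.ppf(0.975, df=n - 1)
    hits = 0
    for _ in range(reps):
        x = draw(n)
        m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
        hits += (m - tcrit * se <= true_mean <= m + tcrit * se)
    return hits / reps

# Mild skewness: coverage typically stays fairly close to the nominal 95%.
print("Exp(1):        ", t_interval_coverage(lambda k: rng.exponential(1.0, k), 1.0))
# Strong skewness: coverage typically drops noticeably below 95%.
print("LogNormal(0,2):", t_interval_coverage(lambda k: rng.lognormal(0.0, 2.0, k), np.exp(2.0)))
```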
Furthermore, there is what I call the "misspecification paradox": if you use the data in any way to decide whether they are "normal enough", and only then, if they look normal enough, apply a method that assumes normality, this adds a further problem. Standard theory does not take into account that what you do has been selected based on the data, and that selection in itself constitutes a violation of i.i.d. normality. See https://doi.org/10.52933/jdssv.v3i3.73
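A hypothetical simulation sketch of this selection effect (my own construction, not a reproduction of the results in the linked paper): samples come from a skewed distribution whose true mean equals the null value, and a one-sample t-test is run either on every sample or only on samples that first "pass" a Shapiro-Wilk pre-test. The conditional behaviour among the "passed" samples need not match the unconditional behaviour, precisely because passing the pre-test is a data-based selection.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps, true_mean = 20, 20000, 1.0   # Exp(1) has mean 1, so H0 is true

all_rejections, passed_rejections, passed = 0, 0, 0
for _ in range(reps):
    x = rng.exponential(1.0, n)
    rejected = stats.ttest_1samp(x, true_mean).pvalue < 0.05
    all_rejections += rejected
    # Data-based selection step: only proceed "officially" with the t-test
    # if the sample passes a Shapiro-Wilk normality pre-test.
    if stats.shapiro(x).pvalue > 0.05:
        passed += 1
        passed_rejections += rejected

print("t-test rejection rate, all samples:            ", all_rejections / reps)
print("t-test rejection rate, pre-test passed samples:", passed_rejections / passed)
print("fraction of samples passing the pre-test:      ", passed / reps)
```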
So the situation is really a mess, but of course probability modelling still helps, as long as it helps...