When the null hypothesis really is true, test p-values follow a $U(0,1)$ distribution (exactly so for continuous test statistics), no matter the sample size, so the test is no more likely to reject a true null hypothesis when the sample size is large than when it is small.
Consequently, when your test comes back and rejects the null hypothesis, that is a credible rejection: the false rejection rate is controlled at the $\alpha$ level regardless of sample size.
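A quick simulation illustrates this. The code below (a sketch; the sample sizes, seed, and simulation count are arbitrary choices) draws data from a null distribution with mean exactly zero and runs one-sample t-tests, showing that the rejection rate stays near $\alpha = 0.05$ whether $n$ is small or large.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rejection_rate = {}

# Draw data from the null (mean exactly zero) and run one-sample t-tests.
for n in (10, 1000):
    pvals = np.array([
        stats.ttest_1samp(rng.normal(0.0, 1.0, size=n), popmean=0.0).pvalue
        for _ in range(2000)
    ])
    # Under a true null, p-values are ~U(0,1), so the rejection rate
    # at alpha = 0.05 stays near 5% regardless of sample size.
    rejection_rate[n] = (pvals < 0.05).mean()

print(rejection_rate)
```

Both printed rates should hover around 0.05, within simulation noise.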
If you want your analysis to flag meaningful deviations from the null hypothesis while letting tiny, practically unimportant deviations pass, I see a few options.
- Formal equivalence testing, such as two one-sided tests (TOST), works if you have some sense of how close qualifies as "close enough".
- Setting a low $\alpha$ level, perhaps much lower than $\alpha = 0.05$, keeps small deviations that are not of practical significance from being flagged as "significant". A danger here is that backing out the right $\alpha$ requires knowledge of the variance, which you can estimate from data or predict from domain knowledge but probably do not know.
- Calculating a confidence or credible interval shows whether the interval of plausible values contains only values that are trivially different from the value under the null hypothesis.
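To make the first option concrete, here is a minimal sketch of one-sample TOST using `scipy`. The equivalence margin of $\pm 0.05$ and the tiny true mean of $0.01$ are hypothetical choices for illustration; in practice the margin must come from your sense of "close enough".

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high):
    """Two one-sided tests: is the mean within (low, high)?

    Rejecting both one-sided nulls supports equivalence; the TOST
    p-value is the larger of the two one-sided p-values.
    """
    p_lower = stats.ttest_1samp(x, popmean=low, alternative='greater').pvalue
    p_upper = stats.ttest_1samp(x, popmean=high, alternative='less').pvalue
    return max(p_lower, p_upper)

rng = np.random.default_rng(1)
# Large sample with a tiny, practically trivial true mean of 0.01:
x = rng.normal(0.01, 1.0, size=1_000_000)

# The standard t-test flags the trivial deviation from zero...
p_point = stats.ttest_1samp(x, popmean=0.0).pvalue
# ...while TOST with a +/-0.05 margin (hypothetical "close enough")
# supports equivalence of the mean to zero.
p_tost = tost_one_sample(x, -0.05, 0.05)
print(p_point, p_tost)
```

With this large sample, both p-values are small: the point-null test "detects" the trivial nonzero mean, while TOST simultaneously concludes the mean is equivalent to zero within the chosen margin.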
That said, your hypothesis test is doing exactly what it should be doing. You fed it data from a distribution that has a nonzero mean, and it determined that the mean is not zero. I stand by what I wrote here about the Princess and the Pea.