The KPSS statistic is an LM-type test statistic that rejects for large values of the test statistic (you can infer this, for example, from the fact that the smaller the significance level, the larger the critical value). Hence, quantiles in the right tail and small p-values are of interest.
These, however, come from a nonstandard limiting distribution derived by KPSS, whose quantiles are obtained via stochastic simulation, so we cannot produce critical values or p-values as easily as we can for, say, a normally distributed statistic by just invoking qnorm or pnorm. (The idea is similar to that for unit root tests, see e.g. How is the augmented Dickey–Fuller test (ADF) table of critical values calculated?)
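To make the "found via stochastic simulation" point concrete, here is a minimal sketch (the grid size and number of replications are just illustrative choices) that approximates the asymptotic null distribution of the level-stationarity KPSS statistic, which is the integral of a squared Brownian bridge, and reads off the right-tail quantiles that serve as critical values:

    set.seed(1)
    n    <- 1000    # grid points approximating the Brownian motion on [0, 1]
    reps <- 20000   # Monte Carlo replications

    stat <- replicate(reps, {
      W <- cumsum(rnorm(n)) / sqrt(n)   # Brownian motion on [0, 1]
      r <- (1:n) / n
      V <- W - r * W[n]                 # Brownian bridge V(r) = W(r) - r*W(1)
      mean(V^2)                         # Riemann sum for the integral of V(r)^2
    })

    ## right-tail quantiles = critical values; these should be close to the
    ## values tabulated by KPSS for the level case (0.347, 0.463, 0.574, 0.739)
    quantile(stat, c(0.90, 0.95, 0.975, 0.99))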
However, given that your test statistic is (way) smaller than the 10% critical value, we know that your p-value is (way) bigger than 10%. So, unless you were willing to reject at a nominal level (way) larger than 10% (and basically nobody is), you know that you cannot reject the null, and that likely is all you want to know from running the test. (It would, for example, be a problem if you were to use the p-values further in, say, meta-analyses such as those discussed in Can a meta-analysis of studies which are all "not statistically significant" lead to a "significant" conclusion?, for which you of course need more precise p-values.)
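This bounding logic is, incidentally, exactly what the usual R implementation does (a sketch assuming the tseries package; the simulated series is just an example): kpss.test interpolates the p-value within the tabulated range and truncates it at 0.10, warning you that the true p-value is larger than the one printed.

    library(tseries)

    set.seed(1)
    x   <- rnorm(200)                     # a clearly level-stationary series
    out <- kpss.test(x, null = "Level")
    out$statistic                         # far below the 10% critical value (0.347)
    out$p.value                           # reported as 0.1, with a warning that the
                                          # actual p-value is greater than printed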
So your final result is not affected, I suppose.