The sentence "... evidence to reject $H_0$" does not make much sense to me because you either reject $H_0$ when $p\leq\alpha$ or you don't. It is your decision to reject or not to reject. "Rejection" is not an inherent property of the $p$-value because it requires an additional criterion set by the researcher.
What makes more sense is to talk about the evidence against the null hypothesis provided by the $p$-value. If we adopt the view$^{[1,2]}$ that the $p$-value is a continuous measure of compatibility between our data and the model (including the null hypothesis), it makes sense to talk about various degrees of evidence against $H_0$. Personally, I like the approach of Rafi & Greenland$^{[1]}$ to transform the $p$-value into (Shannon) surprisal as $s=-\log_2(p)$, also known as the $S$-value or binary Shannon information. This provides an absolute scale on which to view the information that a specific $p$-value carries. If a single toss of a fair coin provides $1$ bit of information, a $p$-value of, say, $0.05$ provides $s=-\log_2(0.05)\approx 4.32$ bits of information against the null hypothesis. In other words: a $p$-value of $0.05$ is roughly as surprising as seeing all heads in four tosses of a fair coin. For an extensive discussion of the distinction between $p$-values for decisions and $p$-values as compatibility measures, see the recent paper by Greenland$^{[2]}$.
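As a minimal sketch in plain Python (the helper name `s_value` is mine, not from $[1]$), the transform and the coin-toss analogy look like this:

```python
import math

def s_value(p: float) -> float:
    """Surprisal (S-value) in bits: s = -log2(p)."""
    return -math.log2(p)

p = 0.05
print(f"p = {p} -> s = {s_value(p):.2f} bits")  # p = 0.05 -> s = 4.32 bits

# Interpretation: k heads in a row from a fair coin has p = 0.5**k,
# i.e. exactly k bits of surprisal. So 4.32 bits sits just above
# 4 heads in a row (p = 0.0625, s = 4 bits).
print(f"4 heads in a row: p = {0.5**4}, s = {s_value(0.5**4):.0f} bits")
```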
This approach makes it very clear that the evidence provided by a $p$-value is nonlinear. For example: a $p$-value of $0.10$ provides $3.32$ bits of information, whereas a $p$-value of $0.15$ provides $2.74$ bits. The first $p$-value thus provides roughly $21$% more evidence against $H_0$ than the second. In a second example, a $p$-value of $0.001$ provides roughly $132$% more evidence than a $p$-value of $0.051$, despite the absolute difference between them being the same as in the first example ($0.05$). Paper $[1]$ contains an illustration of this mapping; the short sketch below reproduces the numbers.
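Again a quick Python check (my own sketch, not code from $[1]$) of the two comparisons above:

```python
import math

# Two pairs of p-values, each differing by 0.05 in absolute terms:
for p1, p2 in [(0.10, 0.15), (0.001, 0.051)]:
    s1, s2 = -math.log2(p1), -math.log2(p2)
    extra = (s1 / s2 - 1) * 100  # % more evidence carried by the smaller p
    print(f"p={p1}: {s1:.2f} bits vs p={p2}: {s2:.2f} bits -> {extra:.0f}% more")

# p=0.1: 3.32 bits vs p=0.15: 2.74 bits -> 21% more
# p=0.001: 9.97 bits vs p=0.051: 4.29 bits -> 132% more
```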

To answer the question: As long as the $p$-value is smaller than $1$, it provides some evidence against the null hypothesis because it shows some incompatibility between the data and the model. To say "no evidence" would therefore not be entirely accurate.
References
$[1]$: Rafi, Z., Greenland, S. Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise. BMC Med Res Methodol 20, 244 (2020). https://doi.org/10.1186/s12874-020-01105-9
$[2]$: Greenland, S. (2023). Divergence versus decision P-values: A distinction worth making in theory and keeping in practice: Or, how divergence P-values measure evidence even when decision P-values do not. Scand J Statist, 50(1), 54–88. https://doi.org/10.1111/sjos.12625