Typically, $x$ denotes the sample actually observed, while $X$ denotes the corresponding random variable, which could take other values if we were able to draw repeatedly from the underlying distribution.
For example, $T(x)$ could be a t-statistic computed from the sample actually observed. When we compute a $p$-value for that statistic, we seek the probability that the test statistic takes values at least as "extreme" as $T(x)$ (extreme in the sense of being even less compatible with the null hypothesis being tested), where the probability is taken over the distribution of $X$, assuming the null is actually true.
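In symbols (just restating the above; writing $p(x)$ for the $p$-value is my notation):
$$p(x) \;=\; P_{H_0}\!\big(T(X)\ \text{is at least as extreme as}\ T(x)\big),$$
where the randomness sits entirely in $X$, while $T(x)$ is a fixed number once the data are in hand.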
For concreteness, say you want to test $H_0:\mu\geq0$ against $H_1:\mu<0$ for $X_i\sim N(\mu,1)$ (so, for simplicity, the variance is assumed known and equal to one). Then $T(X)=\sqrt{n}\bar{X}$ is distributed as $N(0,1)$ when $\mu=0$, the boundary value of the null, which yields the largest such probability over $H_0$ and hence determines the $p$-value. So the probability that $T(X)$ takes values even more negative (i.e., even less compatible with nonnegative means $\mu$) than the observed $T(x)$ is simply $\Phi(T(x))$.
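Here is a minimal numerical sketch of that example (Python with numpy/scipy; the sample size $n=25$, the seed, and the data-generating mean are made-up illustration values). It computes $\Phi(T(x))$ and checks it against a Monte Carlo approximation that redraws $X$ under the null, which is exactly the "probability over the distribution of $X$" above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical observed sample: n = 25 draws. In practice, x is your data;
# here we generate it with a (made-up) true mean of -0.3 for illustration.
n = 25
x = rng.normal(loc=-0.3, scale=1.0, size=n)

# Observed test statistic T(x) = sqrt(n) * xbar -- a fixed number.
t_obs = np.sqrt(n) * x.mean()

# Exact p-value: under mu = 0, T(X) ~ N(0, 1), so P(T(X) <= T(x)) = Phi(T(x)).
p_exact = norm.cdf(t_obs)

# Monte Carlo check: redraw X from the null (mu = 0) many times and see how
# often the simulated statistic is at least as extreme (at least as negative).
reps = 200_000
sims = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
t_sim = np.sqrt(n) * sims.mean(axis=1)
p_mc = (t_sim <= t_obs).mean()

print(f"T(x) = {t_obs:.3f}, Phi(T(x)) = {p_exact:.4f}, Monte Carlo = {p_mc:.4f}")
```

The two numbers agree up to simulation noise, which makes the $x$-versus-$X$ distinction concrete: `t_obs` is computed once from the observed $x$, while the simulated `t_sim` values play the role of $T(X)$ under the null.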