I have seen the continuous mapping theorem (CMT) used to justify the convergence in probability of the difference of two sequences of random variables when it is known that each sequence converges in probability:
If $X_{1}, \ldots, X_{n} \sim \text{Unif}(\theta_{1}, \theta_{2})$ iid, then one can show that the extreme order statistics satisfy $X_{(1)} \xrightarrow{p} \theta_{1}$ and $X_{(n)} \xrightarrow{p} \theta_{2}$ as $n \to \infty$. Then, by the CMT, $X_{(n)} - X_{(1)} \xrightarrow{p} \theta_{2} - \theta_{1}$.
I am not really comfortable with this last step, because the function being applied is not a function of a single sequence: $g(x, y) = y - x$ takes both sequences as arguments, and the CMT as I know it is stated for a single converging sequence. I'd appreciate it if someone could make the argument more explicit.
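For what it's worth, a quick simulation sketch (with assumed values $\theta_{1} = 2$, $\theta_{2} = 5$, chosen just for illustration) is consistent with the claimed convergence, so my question is only about the formal justification:

```python
import numpy as np

rng = np.random.default_rng(0)
theta1, theta2 = 2.0, 5.0  # assumed parameter values for illustration

for n in [10, 100, 10_000]:
    x = rng.uniform(theta1, theta2, size=n)
    est = x.max() - x.min()  # X_(n) - X_(1), should approach theta2 - theta1 = 3
    print(n, est)
```

The estimate tightens around $\theta_{2} - \theta_{1}$ as $n$ grows, as expected.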