There is no general answer to this; it depends on the situation. In some situations, working directly with sample spaces is enough, such as computing the probability that a certain card shows up in a random subset of a deck of 52 cards. In others, things get complicated enough that even random variables do not suffice, and you need the measure-theoretic foundations. To quote Section 1 of Chapter 1 of *A User's Guide to Measure Theoretic Probability* by David Pollard:
For a rigorous treatment of probability, the measure theoretic approach is a vast improvement over the arguments usually presented in undergraduate courses. Let me remind you of some difficulties with the typical introduction to probability.
Independence
There are various elementary definitions of independence for random variables. For example, one can require factorization of distribution functions,
$$
\Pr(X \leq x, Y \leq y) = \Pr(X \leq x) \cdot \Pr(Y \leq y) \ \text{for all real $x,y$}
$$
The problem with this definition is that one needs to be able to calculate distribution functions, which can make it impossible to establish rigorously some desirable properties of independence. For example, suppose $X_1,\dots,X_4$ are independent random variables. How would you show that
$$
Y = X_1X_2 \left[\log\left(\frac{X_1^2 + X_2^2}{|X_1| + |X_2|}\right) + \frac{|X_1|^3 + X_2^3}{X_1^4 + X_2^4}\right]
$$
is independent of
$$
Z = \sin\left[X_3 + X_3^2 + X_3X_4 + X_4^2 + \sqrt{X_3^4 + X_4^4}\right]
$$
by means of distribution functions? Somehow you would need to express events $\{Y \leq y,Z \leq z\}$ in terms of the events $\{X_i \leq x_i\}$, which is not an easy task. (If you did figure out how to do it, I could easily make up more taxing examples.)
You might also try to define independence via factorization of joint density functions, but I could invent further examples to make your life miserable, such as problems where the joint distributions of the random variables are not even given by densities. And if you could grind out the joint densities, probably by means of horrible calculations with Jacobians, you might end up with the mistaken impression that independence had something to do with the smoothness of the transformations.
The difficulty disappears in a measure theoretic treatment, as you will see in
Chapter 4. Facts about independence correspond to facts about product measures.
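The quoted remark about product measures deserves one more sentence. In outline (this is the standard textbook argument, not anything specific to Pollard's example): $Y$ is a Borel-measurable function of $(X_1, X_2)$ and $Z$ is a Borel-measurable function of $(X_3, X_4)$, so
$$
\sigma(Y) \subseteq \sigma(X_1, X_2), \qquad \sigma(Z) \subseteq \sigma(X_3, X_4),
$$
and the independence of $X_1, \dots, X_4$ makes the sigma-fields $\sigma(X_1, X_2)$ and $\sigma(X_3, X_4)$ independent. Independence of $Y$ and $Z$ follows immediately, with no distribution functions or Jacobians in sight.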
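You can also convince yourself of the factorization numerically. The sketch below assumes the $X_i$ are i.i.d. standard normal (Pollard's point is distribution-free; the normal distribution is just one concrete choice) and compares the empirical joint probability with the product of the empirical marginals at a few points:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumption: X_1, ..., X_4 are i.i.d. standard normal.
x1, x2, x3, x4 = rng.standard_normal((4, n))

# Pollard's Y and Z, computed termwise on the samples.
y = x1 * x2 * (np.log((x1**2 + x2**2) / (np.abs(x1) + np.abs(x2)))
               + (np.abs(x1)**3 + x2**3) / (x1**4 + x2**4))
z = np.sin(x3 + x3**2 + x3 * x4 + x4**2 + np.sqrt(x3**4 + x4**4))

# Empirically, Pr(Y <= y0, Z <= z0) should match Pr(Y <= y0) * Pr(Z <= z0).
for y0, z0 in [(-1.0, 0.0), (0.0, 0.5), (1.0, -0.5)]:
    joint = np.mean((y <= y0) & (z <= z0))
    prod = np.mean(y <= y0) * np.mean(z <= z0)
    print(f"y0={y0:5.1f}  z0={z0:5.1f}  joint={joint:.4f}  product={prod:.4f}")
```

Up to Monte Carlo error the two columns agree, which is exactly what a proof by distribution functions would have to establish at every $(y_0, z_0)$ simultaneously.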
Random variables are especially useful when the sample space is large. Consider the following random experiment: I toss a coin five times. The sample space can be written as
$$
S = \{TTTTT, TTTTH, \dots, HHHHH\}.
$$
Note that the sample space $S$ has $2^5=32$ elements. Suppose that in this experiment we are interested in the outcomes with exactly 3 heads. We could list all such outcomes manually as a subset of $S$, but this would be cumbersome. Instead, we define a random variable $X$ whose value is the number of observed heads; depending on the outcome, $X$ takes one of the values $0,1,2,3,4,5$. Computing $X$ for each of the 32 elements of $S$ gives a list of 32 values (with repetitions), and the event of interest is simply $\{X = 3\}$. Counting the $3$'s in that list shows that $\binom{5}{3}=10$ outcomes produce exactly 3 heads, so $\Pr(X=3)=10/32$.
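For a sanity check, here is a short Python sketch that enumerates the 32 outcomes, evaluates $X$ on each of them, and counts the outcomes on which $X = 3$ (the variable names are mine, chosen for the illustration):

```python
from itertools import product

# Enumerate the sample space S of five coin tosses: 2**5 = 32 outcomes.
S = [''.join(toss) for toss in product('HT', repeat=5)]

# The random variable X maps each outcome to its number of heads.
X = {outcome: outcome.count('H') for outcome in S}

# Count the outcomes on which X takes the value 3.
threes = sum(1 for outcome in S if X[outcome] == 3)
print(threes, len(S))    # 10 32   (the binomial coefficient C(5, 3) = 10)
print(threes / len(S))   # 0.3125, i.e. Pr(X = 3) = 10/32
```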