As I've noted in a related answer, my view is that it is best to reserve the attribution of "spuriousness" for an incorrect inference from correlation to cause. It is important to be able to talk accurately about evidence of correlation (and other nonlinear associations) between variables in statistical analysis, and this often leads to cases where there is clear evidence of correlation, or some other statistical association, between variables. Merely asserting this relationship to be present, when there is evidence that it is indeed present, is certainly not "spurious". Thus, it is not appropriate to refer to inferences of statistical associations as "spurious" in their own right. What is "spurious" is when a person takes evidence of correlation and then uses this to make an inference of a direct causal link between variables, in circumstances where that step is not warranted. For that reason, I find the term "spurious correlation" harmful to discussion, since it actually refers to a spurious inference from a correlation, which does exist, to a cause, which does not.
The items in your list: None of these situations strikes me as inherently "spurious", though they could be accompanied by incorrect inferences in some cases. Items 1-2 of your list merely represent cases where there is sampling error, such that an estimate of a relationship or quantity in a smaller sample is not an accurate reflection of the true relationship or quantity in the larger group from which that sample is drawn. Since statistical methods come with appropriate measures of the likely level of sampling error, there is no need for anything further here. So long as inferences are made using proper estimators, and appropriate measures of uncertainty are constructed that take account of the sampling error (e.g., confidence intervals, Bayesian posterior intervals, etc.), nothing "spurious" is occurring. In my view, it is not a good idea to conflate sampling error with a spurious inference.
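To make the point about sampling error concrete, here is a small illustrative sketch of my own (in Python, with arbitrary numbers, not something from your question): two variables that are truly independent can still show a nonzero sample correlation in a small sample, and a confidence interval that accounts for sampling error makes the weakness of the evidence clear.

```python
# Sketch: sampling error in a small sample, with an interval that accounts for it.
import numpy as np

rng = np.random.default_rng(1)
n = 20                               # small sample
x = rng.normal(size=n)
y = rng.normal(size=n)               # generated independently of x: true correlation is zero

r = np.corrcoef(x, y)[0, 1]          # sample correlation (generally not exactly zero)

# Approximate 95% confidence interval via the Fisher z-transformation
z = np.arctanh(r)
se = 1.0 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"sample r = {r:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# The interval typically straddles zero, reflecting sampling error rather than
# anything "spurious" in the estimate itself.
```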
Item 3 refers to a real statistical association that is merely "uninteresting" because it does not reflect a causal connection between the associated variables (both being driven by a common cause, such as hot weather). Again, there is nothing inherently "spurious" about recognising the existence of this statistical association, but if a person were to infer a causal link between ice-cream sales and drownings, that would indeed be a spurious inference.
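Here is another sketch of my own (the variable names and numbers are invented for illustration) showing how a common cause induces a perfectly real statistical association between two variables that have no direct causal link, and how conditioning on that common cause removes it.

```python
# Sketch: a common cause (temperature) induces an association between ice-cream
# sales and drownings even though neither causes the other.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
temperature = rng.normal(25, 5, size=n)                    # common cause
ice_cream = 2.0 * temperature + rng.normal(0, 5, size=n)   # driven by temperature
drownings = 0.5 * temperature + rng.normal(0, 5, size=n)   # also driven by temperature

print("corr(ice_cream, drownings) =",
      round(np.corrcoef(ice_cream, drownings)[0, 1], 3))   # clearly nonzero

# Regressing out the common cause removes the association, which is what flags
# the causal inference -- not the correlation -- as unwarranted.
def residuals(y, x):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_partial = np.corrcoef(residuals(ice_cream, temperature),
                        residuals(drownings, temperature))[0, 1]
print("partial corr given temperature =", round(r_partial, 3))  # approximately zero
```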
Item 4 appears to me to be impossible. If you trace causality back to its philosophical roots, ultimately it is just an attribution to an object of certain kinds of actions that it takes. (Causality is merely "identity applied to action", i.e., a thing acts according to its nature.) Thus, any process that generates "data" is taking action, and that action can, in principle, be traced back to the nature of the process and its constituent objects. (Note that we speak metaphysically here, not epistemologically; there may be reasons why we cannot uncover the causal chain.)
Which of these items to explain to students: As I see it, there are essentially three principles that come out of your four items, all of which are valuable for an understanding of the interplay between causality and statistical association. First, there is the philosophical question of what causality is at a metaphysical level. Second, there is the question of when causality can properly be inferred from statistical association (and when it cannot). Third, there is the question of how we find evidence of statistical association, and how accurate our inferences of statistical association are. Each of these issues is of value when teaching statistics, but the first gets you deeper into the territory of philosophy. If you would like your students to develop their skills as experimentalists, then they should take some time to confront each of these questions and build up an integrated theory of statistical association and causality.
At a minimum, I would expect students who take some statistics courses to come out with a reasonable understanding of methods for estimating statistical associations, and the likely level of sampling error, and I would expect them to understand the injunction that "correlation is not cause". Over time they should develop a deeper understanding of causal structures and their statistical implications, and ultimately they should develop the ability to plan and understand experimental structures that are designed to allow a transition from inference of association to inference of causality. It is certainly desirable if your students can back this up with a reasonably coherent philosophical explanation of causality, but that is quite rare, and it is excusable for that to be left out of a statistics course. (Interested students can be directed to the philosophy department for courses on that subject.)
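As a final sketch of my own (a hypothetical setup, with invented names and numbers), here is the sense in which experimental structure licenses the step from association to causation: randomisation severs the link between the treatment and any would-be confounder, so the observed association then does reflect the causal effect.

```python
# Sketch: a confounder biases the observational comparison, while randomised
# assignment recovers the true effect.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
confounder = rng.normal(size=n)
true_effect = 1.0

# Observational setting: the confounder influences who receives the treatment.
treat_obs = (confounder + rng.normal(size=n) > 0).astype(float)
y_obs = true_effect * treat_obs + 2.0 * confounder + rng.normal(size=n)
naive = y_obs[treat_obs == 1].mean() - y_obs[treat_obs == 0].mean()

# Experimental setting: treatment is assigned at random, independent of everything else.
treat_rct = rng.integers(0, 2, size=n).astype(float)
y_rct = true_effect * treat_rct + 2.0 * confounder + rng.normal(size=n)
rct = y_rct[treat_rct == 1].mean() - y_rct[treat_rct == 0].mean()

print(f"true effect = {true_effect}")
print(f"observational difference in means = {naive:.2f}  (biased by the confounder)")
print(f"randomised difference in means    = {rct:.2f}  (close to the true effect)")
```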