Does the following sampling process model any particular real-life scenario? If so, could you point me to some relevant scientific papers?
Consider an arbitrary probability distribution $\mathcal{D}$, and sample $n$ times from it. Denote by $X_1, \dots, X_n$ the corresponding random variables, with the following dependency: once $X_i$ has been sampled, it is no longer possible to get another sample "close" to it with respect to some distance $d$, or in other words,
$$\exists \delta > 0, \ \forall i \neq j, \quad d(X_i, X_j) \geq \delta.$$
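For concreteness, here is a minimal sketch (my own illustration, not taken from any reference) of how I picture simulating this process: draws are rejected whenever they land within $\delta$ of an already accepted sample. The function names `constrained_sample` and `sampler` are hypothetical, and I use $|\cdot|$ as a stand-in for the distance $d$:

```python
import numpy as np

def constrained_sample(sampler, n, delta, max_tries=10_000):
    """Draw n samples from `sampler`, rejecting any draw within
    distance delta (here |x - y|) of a previously accepted sample."""
    samples = []
    for _ in range(n):
        for _ in range(max_tries):
            x = sampler()
            if all(abs(x - s) >= delta for s in samples):
                samples.append(x)
                break
        else:
            raise RuntimeError("could not place another sample at distance >= delta")
    return samples

# Example usage with a standard normal as D
rng = np.random.default_rng(0)
draws = constrained_sample(lambda: rng.normal(0.0, 1.0), n=5, delta=0.1)
```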
EDIT 2017-04-13:
I think an example of such a sampling procedure is sampling without replacement from a finite, discrete subset $\mathcal{D}$ of the real line. All the values taken by the $X_i$ will be different, and $\delta$ would be the minimum distance between distinct elements of $\mathcal{D}$:
$$\delta = \min_{\substack{x, y \in \mathcal{D} \\ x \neq y}} d(x, y)$$
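In code, for a toy finite support (the values below are only an example I made up), this amounts to sampling without replacement, with $\delta$ the smallest pairwise gap:

```python
import itertools
import random

support = [0.0, 0.3, 1.1, 2.0, 5.0]   # example finite support for D
delta = min(abs(x - y) for x, y in itertools.combinations(support, 2))  # 0.3 here
draws = random.sample(support, k=3)    # sampling without replacement
```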
I am now looking for a way to generalize this idea, for example to continuous distributions or to infinite sets where the distance between elements can tend to 0. Suppose the distribution is normal, $\mathcal{N}(\mu = 5, \sigma = 1)$, one draw comes out to be $x_1 = 6$, and I know from the problem I am studying (the existence and an example of such a problem is the actual question I am asking) that subsequent samples cannot fall within a distance $\delta = 0.1$ of $x_1$.
This effectively introduces the following dependency for $X_2$: $$P(X_2 \in [x_1 - \delta, x_1 + \delta] \mid X_1 = x_1) = 0$$
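As a sanity check on this conditional law (assuming, as I do here, that the excluded probability mass is simply redistributed by rejection over the rest of the real line; the helper `draw_x2` is hypothetical), the constraint can be simulated like this:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, delta = 5.0, 1.0, 0.1
x1 = 6.0                                   # the observed first draw

def draw_x2():
    # redraw until the sample avoids the excluded interval [x1 - delta, x1 + delta]
    while True:
        x = rng.normal(mu, sigma)
        if abs(x - x1) >= delta:
            return x

x2s = np.array([draw_x2() for _ in range(100_000)])
print(np.mean(np.abs(x2s - x1) < delta))   # 0.0 by construction
```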