It's probably easier here to write out the structural model implied by the DAGs and look at how the corresponding functions behave when intervened on. I'll do this briefly for confounders and colliders. Admittedly, some of this is a little new to me too, so apologies if it's somewhat sloppy. The best resource to learn more is probably Judea Pearl's *Causality*. I hope this helps answer your question anyway!
Confounders
Say we have a simple DAG with variables $X$ and $Y$ confounded by a third variable $W$, where $X$ does not cause $Y$ or vice versa. This would look like $X \leftarrow W \rightarrow Y$. What this implies is that $X$ is generated from $W$ plus some exogenous error, $Y$ is generated from $W$ plus some exogenous error, and $W$ is generated from an exogenous error alone. The error terms of the variables are mutually independent. For example, the error in $X$ causes stochasticity in $X$, but is fully independent of $Y$ and $W$. We can write the corresponding (unknown) functions for our DAG as
$w = f_1(u_1)$
$x = f_2(w, u_2)$
$y = f_3(w, u_3)$
which simply states that each realization $w$ depends on no other variable (only some stochastic error), while the corresponding values for $x$ and $y$ depend on that particular $w$ in addition to the exogenous error. You could intervene on this causal model by setting $W$ to different values. This would change the input for the functions $f_2(w, u_2)$ and $f_3(w, u_3)$ each time you try a new $w$. In other words, changing $w$ also changes $x$ and $y$, creating an association between them. We have an open path.
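To make this concrete, here is a quick simulation of this structural model. The linear forms and standard-normal errors below are just one arbitrary choice consistent with the DAG (the DAG itself says nothing about the functional forms):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear version of the structural model X <- W -> Y,
# with independent standard-normal exogenous errors.
u1, u2, u3 = rng.normal(size=(3, n))
w = u1                # w = f1(u1)
x = 2.0 * w + u2      # x = f2(w, u2)
y = -1.5 * w + u3     # y = f3(w, u3)

# X and Y are associated even though neither causes the other:
# both respond to W, so the path X <- W -> Y is open.
print(np.corrcoef(x, y)[0, 1])  # clearly negative here
```

Any other choice of $f_2$ and $f_3$ that actually uses $w$ would produce some association as well; the linear coefficients only determine its strength and sign.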
What if we condition on $W$ now to close this open path? Maybe we simply set $W = 1$. Our functions for $x$ and $y$ become
$x = f_2(1, u_2)$
$y = f_3(1, u_3)$
$X$ and $Y$ can now only change through their own errors $U_2$ and $U_3$. Since these are assumed to be independent, $X$ and $Y$ are independent and we have closed the open path.
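Repeating the same kind of simulation with $W$ held at $1$ shows the path closing. Again, the linear functions and normal errors are an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical linear f2 and f3, but with W fixed at 1,
# so x = f2(1, u2) and y = f3(1, u3).
u2, u3 = rng.normal(size=(2, n))
x = 2.0 * 1.0 + u2
y = -1.5 * 1.0 + u3

# With W held constant, only the independent errors remain,
# so the correlation between X and Y is approximately zero.
print(np.corrcoef(x, y)[0, 1])
```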
Colliders
If $W$ were a collider instead, our DAG would change to $X \rightarrow W \leftarrow Y$ and the functions of our structural model would be
$w = f_4(x, y, u_4)$
$x = f_5(u_5)$
$y = f_6(u_6)$
We now have the same situation that we had in the confounder example after conditioning on $W$: since $X$ and $Y$ only depend on independent errors, they are independent. There is no open path.
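A simulation of the collider model shows this marginal independence. As before, the particular functional forms are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical linear version of the collider model X -> W <- Y.
u4, u5, u6 = rng.normal(size=(3, n))
x = u5              # x = f5(u5)
y = u6              # y = f6(u6)
w = x + y + u4      # w = f4 combines x, y, and the error u4

# X and Y depend only on their own independent errors,
# so their correlation is approximately zero.
print(np.corrcoef(x, y)[0, 1])
```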
Explaining what happens when $W$ is conditioned on is a bit less intuitive. For some intuition, assume that $X$, $Y$, and $W$ are binary variables. Also suppose we knew that the function generating each realization of $W$ was $w = I(x = 1 \lor y = 1)$. In other words, $w$ takes the value $1$ if and only if at least one of $x$ and $y$ takes the value $1$. A possible example for such a data-generating process is one where everyone older than 50 years ($x = 1$) or female ($y = 1$) is invited to a cancer screening ($w = 1$).
What if we condition on $W$ now? We cause selection bias! For example, if we condition on $w = 1$, we exclude every observation with $x = 0$ and $y = 0$, because no such observation can produce $w = 1$ under our structural model. Put differently, if we look only at those invited to a cancer screening, we will observe no one who is neither female nor older than 50. This creates an association between $X$ and $Y$; we have opened a path. It might be a little easier to see the other way around: if we condition on $W = 0$, we know that $X = 0$ and $Y = 0$, so the value of $W$ completely determines both variables at once.
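The screening example can be simulated directly. The marginal probabilities below are made up; only the rule $w = I(x = 1 \lor y = 1)$ comes from the example:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Binary collider example: x = 1 (older than 50), y = 1 (female),
# w = 1 (invited to screening). Probabilities are arbitrary.
x = rng.random(n) < 0.3   # older than 50
y = rng.random(n) < 0.5   # female
w = x | y                 # invited iff x = 1 or y = 1

xf, yf = x.astype(float), y.astype(float)
# Marginally, X and Y are independent ...
print(np.corrcoef(xf, yf)[0, 1])
# ... but among the invited (w = 1) they become negatively
# associated: within this group, not being female makes it
# certain that the person is older than 50.
print(np.corrcoef(xf[w], yf[w])[0, 1])
```

The negative sign is the classic "explaining away" pattern: among the invited, learning that someone is female makes it less likely that age is the reason they were invited.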