The nice (and dangerous) thing about "gaining intuition" is that one does not have to be mathematically strict or even "correct" (that was a warning for what is to follow, in case it didn't register as such).
Part I.
We are looking at two series of numbers $\{y, x\}$. We have a computational algorithm called "ordinary least squares", and using this method we obtain an estimate of each series based on the other:
$$\hat{y} = \hat{\beta}_1 x + \hat{\beta}_0,\qquad \hat{x} = \hat{\alpha}_1 y + \hat{\alpha}_0.$$
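To make the setup concrete, here is a minimal numerical sketch in Python (the data are synthetic and purely illustrative; `numpy` is assumed):

```python
import numpy as np

# Illustrative synthetic data: a noisy linear relation between x and y.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

# Fit OLS in both directions; np.polyfit returns (slope, intercept).
b1, b0 = np.polyfit(x, y, 1)   # y-hat = b1 * x + b0
a1, a0 = np.polyfit(y, x, 1)   # x-hat = a1 * y + a0
y_hat = b1 * x + b0
x_hat = a1 * y + a0
```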
We also define the "coefficient of determination", denoted $R^2$, as
$$R^2_{y|x} = \frac{n^{-1}\sum(\hat y-\bar {\hat y})^2}{n^{-1}\sum(y - \bar y)^2}.$$
The bar denotes the arithmetic average. In statistical terminology, this is a ratio of "sample variances". The variance is verbally described as the "average squared deviation from the mean" (the arithmetic average in our case), or the "average squared variation around the mean".
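Continuing the sketch above, the definition translates directly into code (note that `np.var` uses the $n^{-1}$ normalization by default, matching the formula):

```python
# R^2 as a ratio of sample variances of fitted versus observed values.
R2_y_given_x = np.var(y_hat) / np.var(y)   # np.var centers on the mean internally
```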
How could we describe the "variance" without using statistical terminology?
The variance is a measure of how much we do NOT behave as a constant, the constant being our arithmetic average.
And the defining property of a constant is that it doesn't change. So we managed to connect the concept of "variance/variation around" with the concept of "change in relation to".
A mathematical concept and symbol related to change is the differential. And here we would want to think of the average differential. Conjuring the symbol $d_A$ to represent this concept of "average differential" we can then write the quotient
$$R^2_{y|x} = \frac{[d_A(\hat y - \bar {\hat y})]^2 }{[d_A(y - \bar y)]^2} = \frac{[d_A(\hat y)]^2 }{[d_A(y)]^2}.$$
The reason for the simplification is that the arithmetic average is a constant so it has zero average differential.
We can write the same for the other relation,
$$R^2_{x|y} = \frac{[d_A(\hat x)]^2 }{[d_A(x)]^2}.$$
A first hurdle to overcome intuitively is the known fact that $R^2_{y|x} = R^2_{x|y}$, i.e. that the two coefficients of determination are the same. But that intuition is not my goal here. So we have
$$R^2_{y|x} = R^2_{x|y} = R^2_{y,x} \implies \frac{[d_A(\hat y)]^2 }{[d_A(y)]^2} = \frac{[d_A(\hat x)]^2 }{[d_A(x)]^2} \implies \frac{d_A(\hat y) }{d_A(y)} = \frac{d_A(\hat x) }{d_A(x)}.$$
But this means that the two ratios are interchangeable, so we can replace one factor of the square by the other and write
$$R^2_{y,x} = \frac{d_A(\hat y) }{d_A(y)} \cdot \frac{d_A(\hat x) }{d_A(x)}.$$
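A quick numerical check of the symmetry claim, on the same illustrative data:

```python
# The coefficient of determination is the same in both directions.
R2_x_given_y = np.var(x_hat) / np.var(x)
print(np.isclose(R2_y_given_x, R2_x_given_y))   # True
```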
Part II.
When we want to talk and write about the estimated coefficients $\hat \beta_1$ and $\hat \alpha_1$ in a fancy way, we say "the derivative of" and we often write
$$\frac {d\hat y}{d x} = \hat \beta_1,\qquad \frac {d\hat x}{d y} = \hat \alpha_1.$$
But in reality, we know that $\hat \beta_1$ is some average measure of "change in $\hat y$ as $x$ changes" (and likewise for $\hat \alpha_1$). So perhaps we should use our symbol $d_A$ and write
$$\frac {d_A(\hat y)}{d_A (x)} = \hat \beta_1,\qquad \frac {d_A(\hat x)}{d_A(y)} = \hat \alpha_1.$$
$$\implies \hat \beta_1 \cdot \hat \alpha_1 = \frac {d_A(\hat y)}{d_A (x)} \cdot \frac {d_A(\hat x)}{d_A(y)}.$$
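As a side check, the "average measure of change" reading can be backed numerically: the standard closed-form OLS slopes, computed on the same illustrative data, agree with the fitted ones.

```python
# Each OLS slope is an average ratio of co-movements: the standard
# closed-form expressions agree with the fitted slopes.
sxy = np.sum((x - x.mean()) * (y - y.mean()))
b1_closed = sxy / np.sum((x - x.mean()) ** 2)   # slope of y on x
a1_closed = sxy / np.sum((y - y.mean()) ** 2)   # slope of x on y
print(np.isclose(b1, b1_closed), np.isclose(a1, a1_closed))   # True True
```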
Now, we have just moved away from "infinitesimal changes", which makes it even easier to commit Leibniz's Cardinal Sin: treat these "average derivatives" as quotients of differentials, which allows us to swap the denominators and arrive at
$$\hat \beta_1 \cdot \hat \alpha_1 = \frac {d_A(\hat y)}{d_A (y)} \cdot \frac {d_A(\hat x)}{d_A (x)}.$$
The right-hand side is exactly the expression we obtained for $R^2_{y,x}$ at the end of Part I, so the conclusion of Part II is identical to the conclusion of Part I: $\hat \beta_1 \cdot \hat \alpha_1 = R^2_{y,x}$.
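And the punchline survives a numerical check on the illustrative data:

```python
# The product of the two estimated slopes reproduces R^2.
print(np.isclose(b1 * a1, R2_y_given_x))   # True
```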
Did I provide any kind of intuition here? That's for others to say. I just note that these conclusions can now be described in terms of "ratios of average changes" without using more technical mathematical or statistical terminology, like variance, standard deviation, correlation, etc.