Edit (05/07/2023)
When answering this question, I realized the job can actually be done by invoking only Lemma 1 below (hence avoiding the much more difficult Lemma 2), which substantially reduces the machinery and calculations. The argument (largely exploiting the independence between $T(X)$ and $r$) may well be how the authors discovered the variance expression.
With the notation introduced in my old answer, the goal is to prove
\begin{align}
& E[T_iT_j] = 0, & 1 \leq i \neq j \leq n, \tag{i} \\
& E[T_i^2T_j^2] = \frac{1}{n(n + 2)}, & 1 \leq i \neq j \leq n, \tag{ii} \\
& E[T_i^2T_jT_k] = 0, & i, j, k \text{ distinct}, \tag{iii} \\
& E[T_iT_jT_kT_l] = 0, & i, j, k, l \text{ distinct}. \tag{iv}
\end{align}
We assume normality of $X$ from now on. To prove (i), write $X_iX_j = T_iT_j \times r^2$. Since $T_iT_j$ is independent of $r^2$ ($T_iT_j$ is a function of $T(X)$), $E[X_iX_j] = E[T_iT_jr^2] = E[T_iT_j]E[r^2]$. But $E[X_iX_j] = E[X_i]E[X_j] = 0$ (by normality) and $E[r^2] > 0$, so $E[T_iT_j] = 0$. In the same manner, (iii) and (iv) follow from $E[X_i^2X_jX_k] = E[X_i^2]E[X_j]E[X_k] = 0$, $E[X_iX_jX_kX_l] = 0$, and $E[r^4] > 0$.
Similarly, $X_i^2X_j^2 = T_i^2T_j^2 \times r^4$ and independence imply that $E[X_i^2X_j^2] = E[T_i^2T_j^2]E[r^4]$, whence
\begin{align}
E[T_i^2T_j^2] = \frac{E[X_i^2]E[X_j^2]}{E[r^4]} = \frac{1}{E[r^4]}.
\end{align}
So it suffices to determine $E[r^4]$, which is straightforward:
\begin{align}
E[r^4] &= E[(X_1^2 + \cdots + X_n^2)^2] = nE[X_1^4] + 2\binom{n}{2}E[X_1^2X_2^2] \\
&= 3n + n(n - 1) = n(n + 2).
\end{align}
This completes the proof of (ii).
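For anyone who wants an empirical sanity check of (i)-(iv), here is a minimal Monte Carlo sketch in Python/NumPy (the dimension $n$, the seed, and the chosen coordinate indices are arbitrary):
```python
# Minimal Monte Carlo check of moments (i)-(iv); n, the seed,
# and the coordinate indices below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 6, 10**6

X = rng.standard_normal((n_sims, n))                  # rows are N_n(0, I_n) draws
T = X / np.linalg.norm(X, axis=1, keepdims=True)      # T = X / ||X||, uniform on S_n

print(np.mean(T[:, 0] * T[:, 1]))                       # (i):   ~ 0
print(np.mean(T[:, 0]**2 * T[:, 1]**2), 1/(n*(n + 2)))  # (ii):  ~ 1/(n(n+2))
print(np.mean(T[:, 0]**2 * T[:, 1] * T[:, 2]))          # (iii): ~ 0
print(np.mean(T[:, 0] * T[:, 1] * T[:, 2] * T[:, 3]))   # (iv):  ~ 0
```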
Old Answer (04/05/2023)
One way to show this result is to use the following two lemmas (Lemma 1 and Lemma 2 are Theorem 1.5.6 and Exercise 1.32, respectively, in *Aspects of Multivariate Statistical Theory* by R. Muirhead):
Lemma 1. If $X$ has an $m$-variate spherical distribution with $P(X = 0) = 0$, and $r = \|X\| = (X'X)^{1/2}$, $T(X) = \|X\|^{-1}X$, then $T(X)$ is uniformly distributed on $S_m$, and $T(X)$ and $r$ are independent.
Lemma 2. Let $T$ be uniformly distributed on $S_m$ and partition $T$ as $T' = (\mathbf{T_1}' | \mathbf{T_2}')$, where $\mathbf{T_1}$ is $k \times 1$ and $\mathbf{T_2}$ is $(m - k) \times 1$. Then $\mathbf{T_1}$ has density function
\begin{align*}
f_{\mathbf{T_1}}(u) = \frac{\Gamma(m/2)}{\pi^{k/2}\Gamma[(m - k)/2]}(1 - u'u)^{(m - k)/2 - 1}, \quad 0 < u'u < 1. \tag{1}
\end{align*}
Here $S_m$ stands for the unit sphere in $\mathbb{R}^m$: $S_m = \{x \in \mathbb{R}^m: x'x = 1\}$. The proof of Lemma 1 can be found in the referenced text, and the proof of Lemma 2 can be found in this link (NOT EASY!).
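Before trusting Lemma 2, you can also see it in action numerically. The sketch below (an illustration, not a proof) compares a crude empirical density of $T_1$ against $(1)$ with $k = 1$; the dimension $m$, the evaluation grid, and the bin half-width are arbitrary choices:
```python
# Compare an empirical density of T_1 with the k = 1 case of (1):
# f(u) = Gamma(m/2) / (sqrt(pi) * Gamma((m-1)/2)) * (1 - u^2)^((m-3)/2).
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(1)
m = 5
X = rng.standard_normal((10**6, m))
T1 = X[:, 0] / np.linalg.norm(X, axis=1)   # first coordinate of T = X / ||X||

u = np.linspace(-0.9, 0.9, 5)              # evaluation points
f = gamma(m/2) / (pi**0.5 * gamma((m - 1)/2)) * (1 - u**2)**((m - 3)/2)

h = 0.01                                   # bin half-width for the empirical density
emp = np.array([np.mean(np.abs(T1 - ui) < h) / (2*h) for ui in u])
print(np.round(f, 3))                      # theoretical density values
print(np.round(emp, 3))                    # should be close to the line above
```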
With these preparations, let's now attack the problem. Write $X = (a_1, \ldots, a_n)' \sim N_n(0, I_n)$; then by Lemma 1$^\dagger$, $r_k$ can be rewritten as
\begin{align*}
r_k = T_1T_{k + 1} + T_2T_{k + 2} + \cdots + T_{n - k}T_n,
\end{align*}
where $T := (T_1, \ldots, T_n)' = X/\|X\|$ is uniformly distributed on $S_n$.
By Lemma 2, for $1 \leq i \neq j \leq n$, we have
\begin{align*}
& E(T_iT_j) = \int_{0 < t_i^2 + t_j^2 < 1}t_it_jf_{(T_i, T_j)}(t_i, t_j)dt_idt_j, \tag{2} \\
& E(T_i^2T_j^2) = \int_{0 < t_i^2 + t_j^2 < 1}t_i^2t_j^2f_{(T_i, T_j)}(t_i, t_j)dt_idt_j, \tag{3}
\end{align*}
where $f_{(T_i, T_j)}(t_i, t_j)$ is given by $(1)$ with $\mathbf{T_1} = (T_i, T_j)'$ (i.e., $m = n$ and $k = 2$). To evaluate $(2)$ and $(3)$, apply the polar transformation $t_i = r\cos\theta$, $t_j = r\sin\theta$, $0 < r < 1$, $0 \leq \theta < 2\pi$. It then follows that
\begin{align}
E(T_iT_j) = \frac{\frac{n}{2} - 1}{\pi}\int_0^1\int_0^{2\pi}r^2\sin\theta\cos\theta(1 - r^2)^{n/2 - 2}rdrd\theta = 0. \tag{4}
\end{align}
This is because $\int_0^{2\pi}\sin\theta\cos\theta d\theta = 0$.
In addition, it follows from
\begin{align}
& \int_0^{2\pi}\sin^2\theta\cos^2\theta d\theta = \frac{1}{4}\pi, \\
& \int_0^1 r^5(1 - r^2)^{n/2 - 2}dr = \frac{1}{2}B\left(3, \frac{n}{2} - 1\right) =
\frac{1}{(\frac{n}{2} + 1) \times \frac{n}{2} \times (\frac{n}{2} - 1)}
\end{align}
that
\begin{align}
E(T_i^2T_j^2) = \frac{\frac{n}{2} - 1}{\pi}\int_0^1\int_0^{2\pi}r^4\sin^2\theta\cos^2\theta(1 - r^2)^{n/2 - 2}rdrd\theta = \frac{1}{n(n + 2)}. \tag{5}
\end{align}
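If you would rather let a CAS double-check $(4)$ and $(5)$, the following SymPy sketch evaluates both polar integrals for a few concrete even sample sizes (so that the exponent $n/2 - 2$ is a nonnegative integer and the integrals are elementary); the chosen values of $n$ are arbitrary:
```python
# Symbolic check of (4) and (5): a verification, not a derivation.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

for n in [4, 6, 10]:                             # arbitrary even sample sizes
    dens = (sp.Rational(n, 2) - 1) / sp.pi * (1 - r**2)**(sp.Rational(n, 2) - 2)
    jac = r                                      # Jacobian of the polar map
    I4 = sp.integrate(dens * r**2 * sp.sin(th) * sp.cos(th) * jac,
                      (th, 0, 2*sp.pi), (r, 0, 1))
    I5 = sp.integrate(dens * r**4 * sp.sin(th)**2 * sp.cos(th)**2 * jac,
                      (th, 0, 2*sp.pi), (r, 0, 1))
    print(n, I4, sp.simplify(I5 - sp.Rational(1, n*(n + 2))))   # expect: n 0 0
```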
To complete the evaluation of the cross-product terms in $\operatorname{Var}(r_k)$, it remains to show that $E[T_a^2T_bT_c] = 0$ for distinct $a, b, c \in \{1, \ldots, n\}$ and that $E[T_aT_bT_cT_d] = 0$ for distinct $a, b, c, d \in \{1, \ldots, n\}$. These calculations follow.
To calculate $E[T_a^2T_bT_c]$, applying Lemma 2 with $\mathbf{T_1} = (T_a, T_b, T_c)'$ yields
\begin{align*}
E(T_a^2T_bT_c) = \int_{0 < t_a^2 + t_b^2 + t_c^2 < 1}t_a^2t_bt_cf_{(T_a, T_b, T_c)}(t_a, t_b, t_c)dt_adt_bdt_c. \tag{6}
\end{align*}
Under the spherical transformation
\begin{align*}
& t_a = r\cos(\theta_1), \\
& t_b = r\sin(\theta_1)\cos(\theta_2), \\
& t_c = r\sin(\theta_1)\sin(\theta_2),
\end{align*}
where $0 < r < 1$, $0 \leq \theta_1 < \pi$, $0 \leq \theta_2 < 2\pi$, the part of the integrand in $(6)$ that involves $\theta_1, \theta_2$ is (after multiplying by the Jacobian determinant) $\cos^2(\theta_1)\sin^3(\theta_1)\cos(\theta_2)\sin(\theta_2)$, which integrates to $0$ over $[0, \pi) \times [0, 2\pi)$. Hence $E[T_a^2T_bT_c] = 0$.
To calculate $E[T_aT_bT_cT_d]$, applying Lemma 2 with $\mathbf{T_1} = (T_a, T_b, T_c, T_d)'$ yields
\begin{align*}
E(T_aT_bT_cT_d) = \int_{0 < t_a^2 + t_b^2 + t_c^2 + t_d^2 < 1}t_at_bt_ct_df_{(T_a, T_b, T_c, T_d)}(t_a, t_b, t_c, t_d)dt_adt_bdt_cdt_d. \tag{7}
\end{align*}
Under the spherical transformation
\begin{align*}
& t_a = r\cos(\theta_1), \\
& t_b = r\sin(\theta_1)\cos(\theta_2), \\
& t_c = r\sin(\theta_1)\sin(\theta_2)\cos(\theta_3), \\
& t_d = r\sin(\theta_1)\sin(\theta_2)\sin(\theta_3),
\end{align*}
where $0 < r < 1$, $0 \leq \theta_1, \theta_2 < \pi$, $0 \leq \theta_3 < 2\pi$, the part of the integrand in $(7)$ that involves $\theta_1, \theta_2, \theta_3$ is (after multiplying by the Jacobian determinant) $\cos(\theta_1)\sin^5(\theta_1)\cos(\theta_2)\sin^3(\theta_2)\cos(\theta_3)\sin(\theta_3)$, which integrates to $0$ over $[0, \pi) \times [0, \pi) \times [0, 2\pi)$. Hence $E[T_aT_bT_cT_d] = 0$.
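Both vanishing angular integrals can likewise be confirmed symbolically; this is only a sanity check of the two claims above:
```python
# The angular parts of (6) and (7) integrate to zero.
import sympy as sp

t1, t2, t3 = sp.symbols('theta_1 theta_2 theta_3')

g3 = sp.cos(t1)**2 * sp.sin(t1)**3 * sp.cos(t2) * sp.sin(t2)
g4 = sp.cos(t1) * sp.sin(t1)**5 * sp.cos(t2) * sp.sin(t2)**3 * sp.cos(t3) * sp.sin(t3)

print(sp.integrate(g3, (t1, 0, sp.pi), (t2, 0, 2*sp.pi)))                  # 0
print(sp.integrate(g4, (t1, 0, sp.pi), (t2, 0, sp.pi), (t3, 0, 2*sp.pi)))  # 0
```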
Putting all these pieces together, we conclude that $E[r_k] = 0$ and
\begin{align}
\operatorname{Var}(r_k) &= E[r_k^2] \\
&= E(T_1^2T_{k + 1}^2) + \cdots + E(T_{n - k}^2T_n^2) + \sum E[T_a^2T_bT_c] + \sum E[T_aT_bT_cT_d] \\
&= (n - k) \times \frac{1}{n(n + 2)} = \frac{n - k}{n(n + 2)}.
\end{align}
This completes the proof. As a by-product, the fact that both $(6)$ and $(7)$ are identically $0$ also readily (now it is truly "readily") implies that $r_k$ and $r_l$ are uncorrelated when $k \neq l$, which is another proposition claimed in the original paper.
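As a final sanity check, here is a short Monte Carlo sketch of the conclusion $\operatorname{Var}(r_k) = (n - k)/(n(n + 2))$ for Gaussian white noise; $n$, $k$, the seed, and the number of replications are arbitrary:
```python
# Monte Carlo check of Var(r_k) = (n - k) / (n (n + 2)) for Gaussian white noise.
import numpy as np

rng = np.random.default_rng(2)
n, k, n_sims = 20, 3, 200_000

a = rng.standard_normal((n_sims, n))
# r_k = (a_1 a_{1+k} + ... + a_{n-k} a_n) / (a_1^2 + ... + a_n^2)
r_k = np.sum(a[:, :n - k] * a[:, k:], axis=1) / np.sum(a**2, axis=1)

print(r_k.var(), (n - k) / (n * (n + 2)))    # the two numbers should agree closely
```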
$^\dagger$: The condition of Lemma 1 implies that the main result still holds when the distributional assumption on the innovations $(a_1, \ldots, a_n)$ is slightly generalized from Gaussian to spherical.