The internally studentized residuals do have exactly unit variance.
Consider a linear regression model $\boldsymbol y=X\boldsymbol\beta+\boldsymbol\varepsilon$, where $\boldsymbol y$ is an $n\times 1$ response vector, $X$ is an $n\times p$ matrix of covariates (fixed), $\boldsymbol \beta$ is a $p\times 1$ vector of parameters, and the error vector $\boldsymbol\varepsilon$ is multivariate normal $N_n(\boldsymbol 0,\sigma^2I)$.
The $i$th internally studentized residual is
$$r_i=\frac{e_i}{\hat\sigma\sqrt{1-h_{ii}}}\,,$$
where $e_i=y_i-\boldsymbol x_i^T\hat{\boldsymbol\beta}$ is the $i$th residual, $h_{ij}$ is the $(i,j)$th entry of the hat matrix $H=X(X^TX)^{-1}X^T$, and $\hat\sigma^2=\frac1{n-p}\sum_{j=1}^n e_j^2$ is the usual unbiased estimator of $\sigma^2$. Also, $\boldsymbol x_i^T$ is the $i$th row of $X$ and $\hat{\boldsymbol\beta}$ is the least squares estimate of $\boldsymbol\beta$.
Note that $e_i \sim N(0,\sigma^2(1-h_{ii}))$ for each $i$.
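As a concrete illustration of these definitions, here is a minimal numerical sketch (numpy on simulated data; the seed, dimensions and coefficient values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design with intercept
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(scale=2.0, size=n)                    # sigma = 2

H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix H = X (X'X)^{-1} X'
h = np.diag(H)                               # leverages h_ii
e = y - H @ y                                # residuals e_i
sigma2_hat = e @ e / (n - p)                 # unbiased estimator of sigma^2
r = e / np.sqrt(sigma2_hat * (1 - h))        # internally studentized residuals r_i
```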
The mean of each $r_i$ is $0$ because its distribution is symmetric about $0$: replacing $\boldsymbol\varepsilon$ by $-\boldsymbol\varepsilon$ changes the sign of $e_i$ but leaves $\hat\sigma$ unchanged. So the variance is just the second moment, which one can find using the distribution of $r_i^2$:
$$\frac{r_i^2}{n-p}\sim \text{Beta}\left(\frac12,\frac{n-p-1}{2}\right) \tag{1}$$
So, using $\operatorname E\left[\operatorname{Beta}(a,b)\right]=\frac{a}{a+b}$, $$\operatorname{Var}(r_i)=(n-p)\operatorname E\left[\frac{r_i^2}{n-p}\right]=\frac{(n-p)/2}{1/2+(n-p-1)/2}=1$$
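A quick Monte Carlo check agrees with this exact result (a sketch with a fixed simulated design; the chosen $n$, $p$ and number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 20, 4, 20000
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # fixed design
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)

r1 = np.empty(reps)                          # studentized residual of case i = 0
for k in range(reps):
    y = X @ np.ones(p) + rng.normal(size=n)  # true beta = (1,...,1), sigma = 1
    e = y - H @ y
    sigma2_hat = e @ e / (n - p)
    r1[k] = e[0] / np.sqrt(sigma2_hat * (1 - h[0]))

print(r1.mean(), r1.var())                   # should be close to 0 and 1
```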
For a simple derivation of $(1)$, we can use the relationship between $\hat\sigma^2$ and $s_{(i)}^2=\frac1{n-p-1}\sum\limits_{j(\ne i)=1}^n \left(y_j-\boldsymbol x_j^T\hat{\boldsymbol\beta}_{(i)}\right)^2$, where $\hat{\boldsymbol\beta}_{(i)}$ is the least squares estimate of $\boldsymbol\beta$ with the $i$th case removed.
First we need the following formula for $\operatorname{DFBETA}_i$:
$$\operatorname{DFBETA}_i := \hat{\boldsymbol\beta}-\hat{\boldsymbol\beta}_{(i)} = \frac{(X^TX)^{-1}\boldsymbol x_i e_i}{1-h_{ii}} \tag{2}$$
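Before proving $(2)$ (this is done at the end of the post), it is easy to confirm numerically against a brute-force refit (a sketch on simulated data; all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, i = 30, 3, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                 # full-data least squares estimate
h_ii = X[i] @ XtX_inv @ X[i]                 # leverage of case i
e_i = y[i] - X[i] @ beta_hat                 # residual of case i

# brute-force leave-one-out refit
X_i, y_i = np.delete(X, i, axis=0), np.delete(y, i)
beta_hat_i = np.linalg.solve(X_i.T @ X_i, X_i.T @ y_i)

dfbeta = XtX_inv @ X[i] * e_i / (1 - h_ii)   # right-hand side of (2)
print(np.allclose(beta_hat - beta_hat_i, dfbeta))   # True
```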
Then, using $(2)$,
\begin{align}
(n-p-1)s_{(i)}^2 &= \sum_{j(\ne i)=1}^n \left[(y_j-\boldsymbol x_j^T\hat{\boldsymbol\beta})+\boldsymbol x_j^T(\hat{\boldsymbol\beta}-\hat{\boldsymbol\beta}_{(i)})\right]^2
\\&=\sum_{j(\ne i)=1}^n \left[e_j+\frac{h_{ji}e_i}{1-h_{ii}}\right]^2
\\&=\sum_{j=1}^n \left[e_j+\frac{h_{ij}e_i}{1-h_{ii}}\right]^2 - \left[e_i+\frac{h_{ii}e_i}{1-h_{ii}}\right]^2
\\&=\sum_{j=1}^n e_j^2 + \frac{e_i^2}{(1-h_{ii})^2}h_{ii} - \frac{e_i^2}{(1-h_{ii})^2}
\\&=(n-p)\hat\sigma^2 - \frac{e_i^2}{1-h_{ii}}
\end{align}
In the penultimate step, we have used $h_{ii}=\sum_{j=1}^n h_{ij}^2$ and $\sum_{j=1}^n h_{ij}e_j=0$, which follow from $H=H^2$ and $H\boldsymbol e=\boldsymbol 0$ respectively.
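This identity can also be verified numerically, again with a brute-force leave-one-out fit (a sketch on simulated data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, i = 25, 4, 7
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ rng.normal(size=p) + rng.normal(size=n)

H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y
sigma2_hat = e @ e / (n - p)

# brute-force leave-one-out fit and residual variance s_(i)^2
X_i, y_i = np.delete(X, i, axis=0), np.delete(y, i)
beta_hat_i = np.linalg.solve(X_i.T @ X_i, X_i.T @ y_i)
s2_i = np.sum((y_i - X_i @ beta_hat_i) ** 2) / (n - p - 1)

lhs = (n - p - 1) * s2_i
rhs = (n - p) * sigma2_hat - e[i] ** 2 / (1 - H[i, i])
print(np.allclose(lhs, rhs))                 # True
```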
Now $(1)$ follows from
\begin{align}
\frac{r_i^2}{n-p}&=\frac{e_i^2/(1-h_{ii})}{(n-p)\hat\sigma^2}
\\&=\frac{\frac{e_i^2}{\sigma^2(1-h_{ii})}}{\frac{(n-p-1)s_{(i)}^2}{\sigma^2}+\frac{e_i^2}{\sigma^2(1-h_{ii})}}
\\&=\frac{U}{U+V}\,,
\end{align}
where $U=\frac{e_i^2}{\sigma^2(1-h_{ii})}\sim \chi^2_1$ and $V=\frac{(n-p-1)s_{(i)}^2}{\sigma^2}\sim \chi^2_{n-p-1}$ are independent, and a ratio $U/(U+V)$ of independent chi-squared variables has exactly the $\operatorname{Beta}\left(\frac12,\frac{n-p-1}{2}\right)$ distribution in $(1)$. The independence holds because $e_i=(1-h_{ii})\left(y_i-\boldsymbol x_i^T\hat{\boldsymbol\beta}_{(i)}\right)$, and $y_i$, $\hat{\boldsymbol\beta}_{(i)}$ and $s_{(i)}^2$ are mutually independent.
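The chi-squared-to-Beta step can be checked by simulation as well (a sketch; it assumes scipy is available for the Kolmogorov-Smirnov comparison):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, p, reps = 20, 4, 50000

U = rng.chisquare(1, size=reps)              # U ~ chi^2_1
V = rng.chisquare(n - p - 1, size=reps)      # V ~ chi^2_{n-p-1}, drawn independently
B = U / (U + V)                              # should follow Beta(1/2, (n-p-1)/2)

# compare the simulated ratios with the claimed Beta distribution
print(stats.kstest(B, stats.beta(0.5, (n - p - 1) / 2).cdf))
```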
To prove $(2)$, we first define $X_{(i)}$ and $\boldsymbol y_{(i)}$ as the $X$ matrix and the $\boldsymbol y$ vector with their $i$th rows removed.
Then $$\hat{\boldsymbol\beta}_{(i)} =(X_{(i)}^TX_{(i)})^{-1}X_{(i)}^T\boldsymbol y_{(i)}$$
Now using
$$X^TX=X_{(i)}^TX_{(i)}+\boldsymbol x_i\boldsymbol x_i^T$$
and $$X^T\boldsymbol y=X_{(i)}^T\boldsymbol y_{(i)}+\boldsymbol x_i y_i$$
together with the Sherman-Morrison formula, we obtain
$$\hat{\boldsymbol\beta}_{(i)}=\hat{\boldsymbol\beta}-\frac{(X^TX)^{-1}\boldsymbol x_i e_i}{1-h_{ii}}$$
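For completeness, the Sherman-Morrison step written out: since $\boldsymbol x_i^T(X^TX)^{-1}\boldsymbol x_i=h_{ii}$,
$$\left(X_{(i)}^TX_{(i)}\right)^{-1}=\left(X^TX-\boldsymbol x_i\boldsymbol x_i^T\right)^{-1}=(X^TX)^{-1}+\frac{(X^TX)^{-1}\boldsymbol x_i\boldsymbol x_i^T(X^TX)^{-1}}{1-h_{ii}}\,,$$
so that
\begin{align}
\hat{\boldsymbol\beta}_{(i)} &= \left[(X^TX)^{-1}+\frac{(X^TX)^{-1}\boldsymbol x_i\boldsymbol x_i^T(X^TX)^{-1}}{1-h_{ii}}\right]\left(X^T\boldsymbol y-\boldsymbol x_iy_i\right)
\\&=\hat{\boldsymbol\beta}-(X^TX)^{-1}\boldsymbol x_iy_i+\frac{(X^TX)^{-1}\boldsymbol x_i\left(\boldsymbol x_i^T\hat{\boldsymbol\beta}-h_{ii}y_i\right)}{1-h_{ii}}
\\&=\hat{\boldsymbol\beta}+\frac{(X^TX)^{-1}\boldsymbol x_i\left(\boldsymbol x_i^T\hat{\boldsymbol\beta}-y_i\right)}{1-h_{ii}}
\\&=\hat{\boldsymbol\beta}-\frac{(X^TX)^{-1}\boldsymbol x_ie_i}{1-h_{ii}}\,,
\end{align}
which is $(2)$.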