I am new to this topic, but I think I can give you a concrete and elementary example of how least squares cross-validation with the leave-one-out method produces an "unbiased" estimate of the integrated mean squared error ($\mathrm{IMSE}$ for short) in the context of kernel density estimation.
Recall a typical univariate kernel density estimator $\widehat{f}$:
$$
\widehat{f}(x)=\frac{1}{n h} \sum_{i=1}^n K\left(\frac{X_i-x}{h}\right),
$$
where we have imposed the following assumptions:
- A.1 $X_1, \cdots, X_n \stackrel{i . i . d .}{\sim} f$.
- A.2 $f^{\prime\prime}(x)$ is continuous and bounded in a neighborhood of $x$.
- A.3 The kernel function $K(\cdot)$ is a symmetric pdf which is maximized at $0$ and satisfies:
$$
(i)\ \int K(u)\, d u=1,\qquad
(ii)\ \nu_2:=\int K^2(u)\, d u<\infty,\qquad
(iii)\ \kappa_2:=\int u^2 K(u)\, d u \in(0, \infty).
$$
- A.4 $h \rightarrow 0$ and $n h \rightarrow \infty$ as $n \rightarrow \infty$; since $h \rightarrow 0$, this also means $n^{-1}=o\left((n h)^{-1}\right)$.
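Before moving on, here is a minimal numerical sketch of the estimator above. Everything in it is my own illustrative choice (Gaussian kernel, sample size, variable names), not something from the question:

```python
import numpy as np

def gaussian_kernel(u):
    """Standard normal pdf; a common kernel satisfying A.3."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(x, sample, h):
    """Evaluate f_hat(x) = (1 / (n h)) * sum_i K((X_i - x) / h) on a grid x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (sample[None, :] - x[:, None]) / h          # shape (len(x), n)
    return gaussian_kernel(u).sum(axis=1) / (len(sample) * h)

# Usage: n = 200 draws from f = N(0, 1), evaluated on a grid with h = 0.4.
rng = np.random.default_rng(0)
X = rng.normal(size=200)
grid = np.linspace(-4.0, 4.0, 201)
f_hat = kde(grid, X, h=0.4)
```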
Under the above assumptions, the integrated mean square error can be computed as:
$$
\begin{aligned}
\mathrm{IMSE}\left(\widehat{f}\right) =& \int \operatorname{MSE} \widehat{f}(x) d x\\
=& \frac{1}{n h}\int K(u)^2 d u+\frac{h^4}{4}\left(\int u^2 K(u) d u\right)^2 \int\left[f^{\prime\prime}(x)\right]^2 d x+o\left(h^4+\frac{1}{n h}\right),
\end{aligned}
$$
which can be derived from the bias and variance of the KDE (the derivation can be found in this question).
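As a quick numerical companion to this formula, the sketch below evaluates its two leading terms for one concrete case. The constants $\nu_2=\frac{1}{2\sqrt{\pi}}$ and $\kappa_2=1$ (Gaussian kernel) and $\int\left[f^{\prime\prime}(x)\right]^2 dx=\frac{3}{8\sqrt{\pi}}$ (for $f=N(0,1)$) are my own illustrative choices:

```python
import numpy as np

NU2 = 1.0 / (2.0 * np.sqrt(np.pi))        # int K^2(u) du for the Gaussian kernel
KAPPA2 = 1.0                               # int u^2 K(u) du for the Gaussian kernel
ROUGH_F2 = 3.0 / (8.0 * np.sqrt(np.pi))    # int [f''(x)]^2 dx for f = N(0, 1)

def imse_leading(h, n, nu2=NU2, kappa2=KAPPA2, rough_f2=ROUGH_F2):
    """Leading IMSE terms: nu2 / (n h) + (h^4 / 4) * kappa2^2 * int [f'']^2 dx."""
    return nu2 / (n * h) + 0.25 * h**4 * kappa2**2 * rough_f2

# The h minimizing the leading terms (set the derivative in h to zero):
# h* = (nu2 / (kappa2^2 * int [f'']^2 dx))^(1/5) * n^(-1/5).
n = 200
h_star = (NU2 / (KAPPA2**2 * ROUGH_F2)) ** 0.2 * n ** (-0.2)
print(h_star, imse_leading(h_star, n))
```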
With this knowledge in hand, let us see how least squares cross-validation with the leave-one-out method produces an "unbiased" estimate of $\mathrm{IMSE}\left(\widehat{f}\right)$.
Let us define the objective function of least squares cross-validation as follows:
$$
\begin{aligned}
LSCV&=\int[\widehat{f}(x)-f(x)]^2 d x \\
&=\underbrace{\int[\widehat{f}(x)]^2 d x}_{=:\mathcal{I}_{1n}}-2 \underbrace{\int \widehat{f}(x) f(x) d x}_{=:\mathcal{I}_{2n}}+\int f^2(x) d x \\
&=\mathcal{I}_{1n} - 2\mathcal{I}_{2n}+\int f^2(x) d x\\
&={\mathcal{I}}_{1n} - 2\widehat{\mathcal{I}}_{2n}+\int f^2(x) d x + o_{\mathbb{P}}(1),\\
\end{aligned}
$$
where $\widehat{\mathcal{I}}_{2n}$ denotes an estimator of ${\mathcal{I}}_{2n}$: the term $\mathcal{I}_{2n}=\int \widehat{f}(x) f(x) d x$ involves the unknown density $f$ and therefore has to be estimated, whereas $\int f^2(x) d x$ does not depend on $h$ and plays no role in the minimization over $h$.
The leave-one-out method is applied in the computation of $\widehat{\mathcal{I}}_{2n}$. For ease of exposition, let $\widehat{f}_{-i}(x):=\frac{1}{(n-1) h} \sum_{j \neq i} K\left(\frac{X_j-x}{h}\right)$ denote the leave-one-out density estimator, and let me define two versions of $\widehat{\mathcal{I}}_{2n}$:
$$
\begin{aligned}
\widehat{\mathcal{I}}_{2n}^{\text{LOO}} :=& \frac{n-1}{n^2} \sum_{i=1}^n \widehat{f}_{-i}\left(X_i\right) =\frac{1}{n^2 h} \sum_{i=1}^n \sum_{j=1, j \neq i}^n K\left(\frac{X_j-X_i}{h}\right),\\
\widehat{\mathcal{I}}_{2n}^{\text{without LOO}} :=& \frac{1}{n} \sum_{i=1}^n \widehat{f}\left(X_i\right)=\frac{1}{n^{2}h} \sum_{i=1}^n\sum_{j=1}^n K\left(\frac{X_j-X_i}{h}\right).
\end{aligned}
$$
Notice that their only difference lies in:
$$
\widehat{\mathcal{I}_{2n}}^{\text{without LOO}} = \widehat{\mathcal{I}_{2n}}^{\text{LOO}} + \frac{1}{n h} K(0).
$$
But as we will see, this difference plays a key role: without leave-one-out, the objective function $LSCV$ is a biased estimate of $\mathrm{IMSE}$ and is driven to its minimum as $h \rightarrow 0$, which violates the requirement that $n h \rightarrow \infty$ as $n \rightarrow \infty$. A quick numerical check of the identity above is sketched right below.
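Here is that check, again with a Gaussian kernel and simulated data (all choices mine; the two estimators are computed directly from their double-sum forms):

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(0)
n, h = 200, 0.4
X = rng.normal(size=n)

# Double sum of K((X_j - X_i)/h) over all pairs, with and without the diagonal (j = i).
kmat = gaussian_kernel((X[None, :] - X[:, None]) / h)
i2n_without_loo = kmat.sum() / (n**2 * h)
i2n_loo = (kmat.sum() - n * gaussian_kernel(0.0)) / (n**2 * h)

# The two estimators differ exactly by K(0) / (n h), as in the display above.
print(i2n_without_loo - i2n_loo, gaussian_kernel(0.0) / (n * h))
```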
Now, taking the expectation $\mathbb{E}_{X}(\cdot)$ of $LSCV^{\text{LOO}}:={\mathcal{I}}_{1n}-2\widehat{\mathcal{I}}_{2n}^{\text{LOO}}+\int f^2(x) d x$ gives:
$$
\mathbb{E}_{X}\left(LSCV^{\text{LOO}}\right) = \mathbb{E}_{X}\left[{\mathcal{I}}_{1 n}\right]-2 \mathbb{E}_{X}\left[\widehat{\mathcal{I}}_{2 n}^{\text{LOO}}\right]+\int f^2(x) d x.
$$
The first term is computed as:
$$
\begin{aligned}
\mathbb{E}_{X}\left[{\mathcal{I}}_{1 n}\right]=& \mathbb{E}_{X}\left[\int[\widehat{f}(x)]^2 d x\right] = \int\mathbb{E}_{X}\left[[\widehat{f}(x)]^2\right] d x\\
=& \int\operatorname{Var}\left[\widehat{f}(x)\right] d x + \int\left(\mathbb{E}_{X}\left[\widehat{f}(x)\right]\right)^2 d x \\
=&\int \left[\frac{f(x)}{n h} \nu_2+o\left(\frac{1}{n h}\right)\right] dx + \int\left(f(x)+\frac{h^2}{2} f^{\prime \prime}(x) \kappa_2 + o\left(h^2\right)\right)^{2} dx \\
=& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \int f^{2}(x) dx + \frac{h^4}{4} \kappa_2^{2} \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right)\\
& + h^{2}\kappa_2\int f(x)f^{\prime \prime}(x) dx + 2 \int f(x)\, o\left(h^2\right) dx ,
\end{aligned}
$$
where the third line again follows from the bias and variance expansions of the KDE (see the linked question), with $\nu_2=\int K^2(u)\, d u$ as defined in A.3.
The second term is computed as:
$$
\begin{aligned}
2 \mathbb{E}_X\left[\widehat{\mathcal{I}}_{2 n}^{\mathrm{LOO}}\right] =& \mathbb{E}_{X}\left[\frac{1}{n^2 h} \sum_{i=1}^n \sum_{j=1, j \neq i}^n2 K\left(\frac{X_j-X_i}{h}\right)\right] = \mathbb{E}_{X}\left[\frac{2(n-1)}{n^{2}} \sum_{i=1}^n \widehat{f}_{-i}(X_{i})\right] \\
=& \frac{2(n-1)}{n^{2}} \sum_{i=1}^n \mathbb{E}_{X_{i}}\left[\mathbb{E}_{X_{-i}}\left[\widehat{f}_{-i}(X_{i})\right]\right] \\
=& \frac{2(n-1)}{n^{2}} \sum_{i=1}^n \mathbb{E}_{X_{i}}\left[f(X_{i})+\frac{h^2}{2} f^{\prime \prime}(X_{i}) \kappa_2+o\left(h^2\right)\right] \\
=& \frac{2(n-1)}{n} \int\left[f(X_{i})+\frac{h^2}{2} f^{\prime \prime}(X_{i}) \kappa_2+o\left(h^2\right)\right]f(X_{i}) d X_{i}\\
=& \frac{2(n-1)}{n} \int\left[f(x)+\frac{h^2}{2} f^{\prime \prime}(x) \kappa_2+o\left(h^2\right)\right]f(x) d x\\
=& \frac{2(n-1)}{n} \int f^{2}(x) dx +\frac{2(n-1)}{n} \int \frac{h^2}{2} f^{\prime \prime}(x)f(x) \kappa_2 dx + \frac{2(n-1)}{n} o\left(h^2\right).
\end{aligned}
$$
Inserting the two expressions back into $\mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]$ gives:
$$
\begin{aligned}
& \mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]\\
=& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \int f^{2}(x) dx + \frac{h^4}{4} \kappa_2^{2} \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right) \\
&+ h^{2}\kappa_2\int f(x)f^{\prime \prime}(x) dx + 2 \int f(x)\, o\left(h^2\right) dx \\
&-\left[\frac{2(n-1)}{n} \int f^{2}(x) dx +\frac{2(n-1)}{n} \int \frac{h^2}{2} f^{\prime \prime}(x)f(x) \kappa_2 dx + \frac{2(n-1)}{n} o\left(h^2\right)\right]+\int f^2(x) d x\\
=& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \frac{h^4}{4} \kappa_2^{2} \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right) + O\left(\frac{1}{n}\right).
\end{aligned}
$$
Since $O\left(\frac{1}{n}\right)=o\left(\frac{1}{n h}\right)$ under A.4, we can now see that
$$
\mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]= \operatorname{IMSE}\left(\widehat{f}\right) + o\left(h^4+\frac{1}{n h}\right),
$$
i.e., the leave-one-out criterion is an asymptotically unbiased estimate of $\operatorname{IMSE}\left(\widehat{f}\right)$.
And it is easy to see that
$$
\mathbb{E}_{X}\left[LSCV^{\text{without LOO}}(h)\right]=\mathbb{E}_{X}\left[LSCV^{\text{LOO}}(h)\right]-\frac{2}{n h} K(0).
$$
From this, one can see, at least in the context of KDE, why leave-one-out cross-validation provides a relatively "unbiased estimate of the true generalization performance": dropping the leave-one-out step adds the term $-\frac{2 K(0)}{n h}$, which diverges to $-\infty$ as $h \rightarrow 0$ and therefore drags the selected bandwidth toward $0$.
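To close, here is a small Monte Carlo sketch of these two conclusions. Everything in it is my own illustrative setup ($f=N(0,1)$, a Gaussian kernel, a simple Riemann sum for $\int\widehat{f}^2$, modest $n$ and replication counts), not something taken from the question. Up to Monte Carlo noise and the neglected $O\left(\frac{1}{n}\right)$ terms, the averaged $LSCV^{\text{LOO}}(h)$ (plus the constant $\int f^2$) should roughly track the leading-term $\mathrm{IMSE}(h)$ and be minimized near the same bandwidth, while the averaged $LSCV^{\text{without LOO}}(h)$ is shifted down by $\frac{2K(0)}{nh}$ and keeps decreasing as $h \rightarrow 0$:

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def lscv_pair(sample, h, n_grid=2001):
    """(LSCV^LOO, LSCV^without-LOO) for one sample, omitting the constant int f^2 dx."""
    n = len(sample)
    grid = np.linspace(sample.min() - 5.0 * h, sample.max() + 5.0 * h, n_grid)
    f_hat = gaussian_kernel((sample[None, :] - grid[:, None]) / h).sum(axis=1) / (n * h)
    i1n = np.sum(f_hat**2) * (grid[1] - grid[0])                      # int f_hat^2 dx (Riemann sum)
    kmat = gaussian_kernel((sample[None, :] - sample[:, None]) / h)   # K((X_j - X_i)/h)
    i2n_no_loo = kmat.sum() / (n**2 * h)
    i2n_loo = (kmat.sum() - n * gaussian_kernel(0.0)) / (n**2 * h)    # drop the diagonal K(0) terms
    return i1n - 2.0 * i2n_loo, i1n - 2.0 * i2n_no_loo

def imse_leading(h, n):
    """Leading IMSE terms for f = N(0,1) and a Gaussian kernel (kappa2 = 1)."""
    nu2, rough_f2 = 1.0 / (2.0 * np.sqrt(np.pi)), 3.0 / (8.0 * np.sqrt(np.pi))
    return nu2 / (n * h) + 0.25 * h**4 * rough_f2

rng = np.random.default_rng(1)
n, reps = 200, 200
int_f2 = 1.0 / (2.0 * np.sqrt(np.pi))          # int f^2(x) dx for f = N(0, 1)
bandwidths = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8])
avg = np.zeros((len(bandwidths), 2))
for _ in range(reps):
    X = rng.normal(size=n)
    avg += np.array([lscv_pair(X, h) for h in bandwidths]) / reps
# Add back int f^2 dx (dropped inside lscv_pair) before comparing with IMSE(h).
for h, (loo, no_loo) in zip(bandwidths, avg):
    print(f"h={h:.2f}  E[LSCV^LOO]+c={loo + int_f2:+.4f}  "
          f"E[LSCV^without LOO]+c={no_loo + int_f2:+.4f}  IMSE~{imse_leading(h, n):.4f}")
```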