(TL;DR version below) If my understanding is correct, bias and variance are measures of the goodness of fit of a statistical estimator with respect to its sampling distribution. So if I have a statistic $t(X)$ that estimates a given population parameter $\theta$ with high variance, it means that the estimates $t(x)$ computed from different samples $x$ drawn from the population are widely spread around their expected value, and hence individual estimates may land far from the true parameter $\theta$.
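To fix notation, these are the definitions I have in mind, taken over the sampling distribution of $X$ (just my understanding, stated so answers can correct it if it is off):

$$\operatorname{Bias}(t) = \mathbb{E}_X\big[t(X)\big] - \theta, \qquad \operatorname{Var}(t) = \mathbb{E}_X\Big[\big(t(X) - \mathbb{E}_X[t(X)]\big)^2\Big].$$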
Now I'm trying to think of how this applies to cross-validation techniques such as:
- Holdout
- Leave-one-out cross-validation (LOOCV)
- k-fold cross-validation (kCV)
My understanding is that these techniques are themselves meta-estimators, where the available training set plays the role of the population and the parameter to be estimated is the generalization error on the unseen portion of the data. What I do not understand, however, is how the bias and variance of these estimators are measured. I would guess that for holdout and kCV the sampling distribution comprises the different ways in which the dataset can be divided into partitions. But what about LOOCV? Its partitioning is deterministic, and yet many textbooks suggest that this method exhibits high variance. I know there are other answers that tackle this question (Bias and variance in leave-one-out vs K-fold cross validation), but I am trying to understand cross-validation methods as statistical estimators from a theoretical perspective.
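To make concrete what I mean by a sampling distribution here, below is a minimal simulation sketch of how I imagine one could measure the bias and variance of these estimators empirically (Python with scikit-learn; the toy linear-regression population, the helper `true_generalization_error`, and the choice to define the estimated "parameter" as the test error of the model fit to each particular training set are all my own assumptions for illustration, not a standard recipe):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_train, n_repeats = 30, 200  # small toy sizes, just for illustration

def draw_sample(n):
    """Draw (X, y) from a toy 'population': y = 2x + Gaussian noise."""
    X = rng.normal(size=(n, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=1.0, size=n)
    return X, y

def true_generalization_error(model, n_test=50_000):
    """Approximate the true expected squared error of `model` on unseen data."""
    X_test, y_test = draw_sample(n_test)
    return np.mean((model.predict(X_test) - y_test) ** 2)

loo_estimates, kcv_estimates, true_errors = [], [], []
for _ in range(n_repeats):
    # One training set drawn from the population; this draw is the source of
    # randomness in the "sampling distribution" I am asking about.
    X, y = draw_sample(n_train)
    model = LinearRegression().fit(X, y)
    true_errors.append(true_generalization_error(model))

    # LOOCV estimate of the generalization error for this training set
    # (the partitioning itself is deterministic).
    loo = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
    loo_estimates.append(loo)

    # 5-fold CV estimate; here the random fold assignment adds a second
    # source of randomness on top of the training-set draw.
    kcv = -cross_val_score(LinearRegression(), X, y,
                           cv=KFold(n_splits=5, shuffle=True),
                           scoring="neg_mean_squared_error").mean()
    kcv_estimates.append(kcv)

# Bias and variance of each CV estimator over repeated training sets.
true_errors = np.array(true_errors)
for name, est in [("LOOCV", loo_estimates), ("5-fold CV", kcv_estimates)]:
    est = np.array(est)
    print(f"{name}: bias ~ {np.mean(est - true_errors):.4f}, "
          f"variance ~ {np.var(est):.4f}")
```

In a setup like this, the LOOCV partitioning is deterministic, so any variance it shows would have to come from the draw of the training set itself rather than from how the data are split, which is exactly the part I am trying to pin down.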
TL;DR: What does the sampling distribution of cross-validation estimators (especially LOOCV) look like, and how can their bias and variance be calculated?