My course notes list two reasons why cross-validation has a pessimistic bias. The first is that the accuracy is measured for models that are trained on less data, which I understand. The second reason, however, I don't understand. Supposedly, when we do cross-validation and divide our data D into training sets D_i and test sets T_i, the D_i and T_i are not independent (and are even complementary) given D.
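To make concrete what I mean by the splits, here is a minimal sketch, assuming scikit-learn's KFold (the dataset is just a placeholder), showing that each fold's D_i and T_i are complementary within D:

```python
# Minimal sketch of the splitting scheme: every fold's training set D_i and
# test set T_i are disjoint, and together they make up all of D, so they are
# complementary (and hence not independent) given D.
import numpy as np
from sklearn.model_selection import KFold

D = np.arange(20)  # stand-in for the full dataset D

for i, (train_idx, test_idx) in enumerate(KFold(n_splits=5).split(D)):
    D_i, T_i = D[train_idx], D[test_idx]
    assert set(D_i) & set(T_i) == set()       # disjoint
    assert set(D_i) | set(T_i) == set(D)      # union is exactly D
    print(f"fold {i}: |D_i| = {len(D_i)}, |T_i| = {len(T_i)}")
```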
However, I don't see why this is different from the situation where we use a fixed test set: if we have a training set D and a test set T, then T and D are also not independent given the union of D and T. In that case there is no bias, so I would expect there to be no bias for cross-validation either (apart from the fact that the model is trained on less data). Of course, since the different models that we train for cross-validation use overlapping data, I would expect their accuracies to be correlated, which could lead to a higher variance, but I don't see how this could introduce a bias.
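For reference, this is a minimal sketch of the comparison I have in mind (assuming scikit-learn; the synthetic dataset from make_classification and the logistic regression estimator are just placeholders): the mean of the per-fold accuracies from cross-validation versus the accuracy from a single fixed train/test split.

```python
# Compare the cross-validation estimate (mean of correlated per-fold accuracies)
# with the accuracy measured once on a fixed held-out test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=500, random_state=0)

# Cross-validation: the models share overlapping training data across folds
fold_scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)
print("per-fold accuracies:", np.round(fold_scores, 3))
print("CV estimate:", fold_scores.mean())

# Fixed split: train once on D, evaluate once on T
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("fixed test-set accuracy:", clf.score(X_te, y_te))
```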