There is an interesting exercise which shows that k-fold cross-validation fails in some cases. The exercise is as follows:
Consider a case in which the label is chosen at random according to $P[y=1] = P[y=0] = 1/2$. Consider a learning algorithm that outputs the constant predictor $h(x)=1$ if the parity of the labels on the training set is 1, and otherwise outputs the constant predictor $h(x)=0$. Prove that the difference between the leave-one-out estimate and the true error in such a case is always $1/2$.
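To see the failure concretely, here is a minimal simulation sketch (assuming NumPy; the function name `loo_error`, the sample size, and the seed are my own choices for illustration). The true error of either constant predictor is $1/2$, since labels are uniform, so the printed gap should always be $0.5$:

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_error(labels):
    """Leave-one-out error estimate for the parity-based learner.

    The learner outputs the constant predictor equal to the parity of
    its training labels, so the LOO prediction for example i is the
    parity of all labels except y_i.
    """
    total = labels.sum()
    preds = (total - labels) % 2   # parity of the set with y_i removed
    return np.mean(preds != labels)

n = 20            # sample size (arbitrary)
true_error = 0.5  # any constant predictor errs on half the labels

for trial in range(5):
    labels = rng.integers(0, 2, size=n)
    est = loo_error(labels)
    print(f"LOO estimate = {est:.1f}, gap from true error = {abs(est - true_error):.1f}")
```

The reason the estimate is always $0$ or $1$: if $s$ is the parity of all $n$ labels, removing $y_i$ leaves parity $(s - y_i) \bmod 2$, which equals $y_i$ exactly when $s$ is even. So every LOO prediction is right when $s$ is even and wrong when $s$ is odd, while the true error stays at $1/2$.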
My question is: in what cases (or under what conditions) does k-fold cross-validation fail? Is it only k-fold cross-validation that fails, or can other types of cross-validation fail as well?