Here is a sketch of an impractical, purely theoretical comparison sort for an array $a$ of size $n$; a brute-force code sketch of the procedure follows the list.
1. Initialize a list of all $n!$ permutations of the $n$ indices.
2. For each pair of indices $i, j$, count how many of the remaining permutations would be rejected if the comparison revealed $a_i < a_j$, and how many would be rejected if it revealed $a_i \nless a_j$; take the minimum of these two counts. Choose the pair of indices that maximizes this minimum number of rejections.
3. Actually perform the comparison on that pair, and remove from the list every permutation that conflicts with the outcome.
4. Stop if only one permutation remains in the list; otherwise go to step 2.
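To make the procedure concrete, here is a minimal sketch in Python (brute force over all $n!$ candidates, so only feasible for very small $n$; it assumes the elements of $a$ are distinct). The function name and the candidate representation, a tuple of indices listed in increasing order of value, are just illustrative choices.

```python
import itertools

def greedy_maximin_sort_comparisons(a):
    """Count comparisons used by the greedy maximin-rejection strategy.

    Brute force over all n! candidate orderings; assumes the elements
    of `a` are distinct. Illustration only, not a practical sort.
    """
    n = len(a)
    # A candidate is a tuple of indices in increasing order of value;
    # it is consistent with a_i < a_j iff i appears before j in the tuple.
    candidates = list(itertools.permutations(range(n)))
    comparisons = 0
    while len(candidates) > 1:
        # Step 2: pick the pair (i, j) whose worse outcome still rejects
        # the largest number of candidates.
        best_pair, best_score = None, -1
        for i in range(n):
            for j in range(i + 1, n):
                says_lt = sum(1 for p in candidates if p.index(i) < p.index(j))
                says_gt = len(candidates) - says_lt
                # Outcome a_i < a_j rejects says_gt candidates; the other
                # outcome rejects says_lt. The greedy score is the minimum.
                score = min(says_lt, says_gt)
                if score > best_score:
                    best_score, best_pair = score, (i, j)
        # Step 3: perform the comparison and discard conflicting candidates.
        i, j = best_pair
        comparisons += 1
        if a[i] < a[j]:
            candidates = [p for p in candidates if p.index(i) < p.index(j)]
        else:
            candidates = [p for p in candidates if p.index(i) > p.index(j)]
    return comparisons

print(greedy_maximin_sort_comparisons([3, 1, 4, 2]))  # comparisons for this input
```

For small $n$ this lets one compare the expected number of comparisons (averaged over all $n!$ inputs) against the information-theoretic lower bound of $\log_2 n!$, but both the candidate list and the pair search grow factorially, so it is only an experimental aid.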
Assuming the input array is drawn uniformly at random from all $n!$ permutations, does this algorithm minimize the expected number of comparisons?
It's fairly easy to see that every comparison the algorithm makes acquires the most information possible from a single comparison (in the worse of the two outcomes). But I have no clue whether this greedy strategy results in optimal or suboptimal behavior for the sorting task as a whole.
This question is a cross-post from CS.SE where it hasn't received an answer for 7+ years.