Are there any known comparison sorting algorithms that do not reduce to sorting networks, such that each element is compared $O(\log n)$ times?

As far as I know, the only way to sort with $O(\log n)$ comparisons per element is to construct an AKS sorting network for $n$ inputs and run the input through it.

AKS is not easy to implement and has an impractical constant factor, so there is motivation to search for other algorithms.

An algorithm with $O(\log^2 n)$ comparisons per item that does not seem to imply a sorting network is presented at http://www.reddit.com/comments/9jqsi/how_to_merge_sorted_lists_with_olog_n_comparisons/ (IIRC, it was first presented by Rob Johnson at Stony Brook's algorithms seminar).
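For concreteness, here is a minimal Python sketch of a merge built on doubling ("galloping") search, in the spirit of the linked discussion and the comments below. The names `gallop_right`, `gmerge`, and `gmerge_sort` are mine, and this is an illustration of the idea, not a verified reconstruction of the linked algorithm or its worst-case per-element guarantee.

```python
import bisect

def gallop_right(lst, key, lo):
    # Index of the first element of lst[lo:] that is > key, found by a
    # doubling search followed by a binary search over the last gap;
    # key participates in O(log(answer - lo + 1)) comparisons.
    step = 1
    while lo + step <= len(lst) and lst[lo + step - 1] <= key:
        step *= 2
    return bisect.bisect_right(lst, key, lo + step // 2,
                               min(lo + step - 1, len(lst)))

def gmerge(a, b):
    # Merge two sorted lists, galloping alternately from each side, so no
    # element is scanned linearly the way it can be in a textbook merge.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            k = gallop_right(a, b[j], i)   # a[i:k] <= b[j]
            out.extend(a[i:k]); i = k
        else:
            k = gallop_right(b, a[i], j)   # b[j:k] <= a[i]
            out.extend(b[j:k]); j = k
    return out + a[i:] + b[j:]

def gmerge_sort(xs):
    # Bottom-up merge sort over gmerge: O(log n) levels of merging, so
    # O(log n) comparisons per element per level would give O(log^2 n)
    # comparisons per element overall.
    runs = [[x] for x in xs]
    while len(runs) > 1:
        runs = [gmerge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

The point is that an element's comparison count within one merge is governed by doubling searches rather than a linear scan; whether this yields $O(\log n)$ comparisons per element per merge in the worst case is exactly what the linked discussion analyzes.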

Jukka Suomela
Chao Xu
  • Update: I should have said I'm looking for algorithms that don't reduce to a sorting network. – Chao Xu Jun 26 '11 at 23:25
  • I don't understand the question: many sequential algorithms seem to match your request. E.g., merge sort is a classical sorting algorithm and does not make more than $\log n$ comparisons per element. Maybe you are asking about parallel sorting algorithms? – J..y B..y Jun 27 '11 at 11:46
  • @Jeremy: If you merge two lists, $(a_1, ..., a_n)$ and $(b_1, ..., b_n)$, you may end up comparing $a_1$ against each of $b_1, ..., b_n$, that is, $\Omega(n)$ comparisons for a single element. And this was just one "merge" step. Of course the average number of comparisons is necessarily small, but the question is about the worst-case complexity. – Jukka Suomela Jun 27 '11 at 13:06
  • I wonder what is the connection between this question and sorting networks. Of course a sorting network is a solution to the problem, but what about the converse case? Could we have a solution to this problem without implicitly designing a good sorting network? – Jukka Suomela Jul 04 '11 at 16:52
  • I believe that's possible. Sorting networks are data-oblivious and have a predetermined pattern of comparisons, but a sorting algorithm can choose different comparisons depending on the data. One can modify merge sort into an algorithm with $O(\log^2 n)$ comparisons per element that doesn't seem to imply a sorting network: http://www.reddit.com/comments/9jqsi/how_to_merge_sorted_lists_with_olog_n_comparisons/ – Chao Xu Jul 04 '11 at 18:11
  • Jukka: Thanks, I get your point. But that's just with naive merging: one can merge $(a_1,\ldots,a_n)$ with $(b_1,\ldots,b_n)$ using doubling search to place each element, which is still $O(n)$ comparisons in total in the worst case but at most $\lg n$ comparisons per element, which yields the version of merge sort alluded to by Chao. – J..y B..y Jul 04 '11 at 20:15
  • @Chao: That is a beautiful algorithm! I think that you should add the statement of the result (an algorithm with $O(\log^2 n)$ comparisons per item which does not seem to imply a sorting network) and the link to the question. – Tsuyoshi Ito Jul 05 '11 at 00:46
  • I think that these algorithms are hard to obtain. For a slightly related problem, see proposition 1 on page 10 here: http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.0562v1.pdf – domotorp Jul 05 '11 at 06:21
  • Is there a precise way to define "does not reduce to a sorting network"? – Sasho Nikolov Jul 05 '11 at 15:51
  • @Sasho: I doubt there is. But the algorithm with $O(\log^2 n)$ comparisons per item described in the linked page gives you an idea of what it intuitively means. – Tsuyoshi Ito Jul 05 '11 at 22:19
  • Are randomized algorithms acceptable? My randomized PRAM algorithms are rusty, but I remember there is a relatively simple randomized algorithm achieving $O(\log n)$ on a parallel computer. In any case, it is not hard to come up with an algorithm that compares every element $O(\log n)$ times with high probability. I can give details if there is interest. – Sariel Har-Peled Jul 05 '11 at 23:34
  • I wonder what would happen if one tried the following strategy: maintain a counter with each element, keeping track of the number of comparisons. Then take a reasonable sorting algorithm, e.g., merge sort; run it as usual, but if at some point you encounter an element that has already been compared too many times ($C \log n$ for a large constant $C$), move it to the "waiting room". In the end the list is sorted, except that we have some elements in the waiting room that still need to be sorted and merged. Recursively sort the waiting room and merge. What would be a worst-case input? (A toy sketch of this idea follows the comment list below.) – Jukka Suomela Jul 06 '11 at 00:12
  • @Sariel: I'm only interested in deterministic algorithms (like you said, $O(\log n)$ with high probability is not hard to come up with). – Chao Xu Jul 06 '11 at 00:48
  • Now there is a new related (but hopefully much easier) question: http://cstheory.stackexchange.com/questions/8073/merging-lists-of-fragile-objects – Jukka Suomela Sep 03 '11 at 22:03
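As a purely illustrative aside, here is a toy Python version of the "waiting room" strategy proposed in the comments above. It is a sketch under my own assumptions (distinct, hashable keys; names like `waiting_room_sort` are made up), and no worst-case analysis is claimed; in particular, the final un-budgeted merge is exactly where the analysis would get hard.

```python
import math
from collections import Counter

def plain_merge(a, b):
    # Ordinary two-pointer merge, used only for the final un-budgeted merge.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def waiting_room_sort(xs, C=4):
    # Merge sort with a per-element comparison budget of C*log2(n);
    # over-budget elements are evicted to a "waiting room", which is
    # sorted recursively and merged back at the end.
    n = len(xs)
    if n <= 1:
        return list(xs)
    budget = C * max(1, math.ceil(math.log2(n)))
    counts, waiting = Counter(), []

    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if counts[a[i]] >= budget:          # evict before comparing again
                waiting.append(a[i]); i += 1; continue
            if counts[b[j]] >= budget:
                waiting.append(b[j]); j += 1; continue
            counts[a[i]] += 1; counts[b[j]] += 1
            if a[i] < b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    runs = [[x] for x in xs]
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    if not waiting:
        return runs[0]
    return plain_merge(runs[0], waiting_room_sort(waiting, C))
```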

1 Answer

Upon discussing this with Michael T. Goodrich, it seems that the parallel sorting algorithm by Cole for the EREW PRAM does the job. See

Richard Cole: Parallel Merge Sort. SIAM J. Comput. 17(4): 770–785 (1988).

In that algorithm there are $O(\log n)$ rounds, and in each round each element participates in $O(1)$ comparisons. (One has to understand the algorithm to see that it does not cheat by making copies of each element.)
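Cole's pipelined algorithm is too involved to sketch here, but the metric in question is easy to instrument: wrap the keys in a counting comparator and measure the maximum number of comparisons any single element participates in under whatever sort you plug in. The `CountingKey` helper below is made up for illustration, not anything from the cited papers.

```python
import functools, random

@functools.total_ordering
class CountingKey:
    # Wraps a value and counts how many comparisons it takes part in.
    def __init__(self, value):
        self.value = value
        self.comparisons = 0

    def _tick(self, other):
        self.comparisons += 1
        other.comparisons += 1

    def __lt__(self, other):
        self._tick(other)
        return self.value < other.value

    def __eq__(self, other):
        self._tick(other)
        return self.value == other.value

vals = list(range(1024))
random.shuffle(vals)
keys = [CountingKey(v) for v in vals]
sorted(keys)                              # any comparison sort can go here
print(max(k.comparisons for k in keys))   # the worst element's count
```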

An extension of that algorithm to the parallel pointer machine is given in

Michael T. Goodrich, S. Rao Kosaraju: Sorting on a Parallel Pointer Machine with Applications to Set Expression Evaluation. J. ACM 43(2): 331–361 (1996).

Kaveh
someone
  • We would like to know who you are! :D – Tayfun Pay Nov 03 '12 at 19:58
  • @someone is Sergio Cabello – someone Nov 20 '12 at 16:11
  • @someone: Thanks for this answer. It mentions results in the EREW model. Since it is an exclusive-read, exclusive-write model, every location is touched $O(1)$ times in each round and thus $O(\log n)$ times in total. Does this imply a straightforward algorithm for finding the median in the RAM (or even EREW) model using $O(\log n)$ comparisons per element? – Vk1 Oct 05 '19 at 17:16