Given a data table with a very large number $N$ of rows, where each row contains a large number $k$ of fields and each field holds a large but fixed number of bits, there are several known methods for constructing an "index" structure so that the following operations can be performed on the table and index in $O(k\log N)$ time (as a function of $N$ and $k$):
1. Insert a new element into the table.
2. Remove a specified element from the table.
3. Given values for the fields, obtain the first record in the table whose field values are greater than the given ones, where the table is sorted lexicographically with field 1 most significant, field 2 next, and so on.
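For concreteness, here is a minimal sketch (in Python) of such a single-ordering index. The name `LexIndex` and the use of a sorted list are illustrative choices, not part of the question: a sorted list makes inserts cost $O(N)$ element moves, so a real implementation would use a balanced search tree or B-tree to obtain the stated $O(k\log N)$ bounds, where each of the $O(\log N)$ comparisons inspects at most $k$ fields.

```python
import bisect
from typing import List, Optional, Tuple

Record = Tuple[int, ...]  # one row: k fixed-width fields, compared lexicographically

class LexIndex:
    """Index over records in one fixed field order (field 1 first, field 2 second, ...)."""

    def __init__(self) -> None:
        self._rows: List[Record] = []  # kept sorted; a balanced tree in a real implementation

    def insert(self, rec: Record) -> None:        # operation 1
        bisect.insort(self._rows, rec)

    def remove(self, rec: Record) -> None:        # operation 2
        i = bisect.bisect_left(self._rows, rec)
        if i < len(self._rows) and self._rows[i] == rec:
            del self._rows[i]

    def successor(self, key: Record) -> Optional[Record]:   # operation 3
        """First record strictly greater than `key` in lexicographic order."""
        i = bisect.bisect_right(self._rows, key)
        return self._rows[i] if i < len(self._rows) else None

# Example with k = 3 fields
idx = LexIndex()
for rec in [(1, 5, 2), (1, 7, 0), (2, 0, 9)]:
    idx.insert(rec)
print(idx.successor((1, 5, 2)))   # -> (1, 7, 0)
```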
We wish to generalize this construction so that, when operation 3 is performed, any of the $k!$ orderings of the $k$ fields can be specified at query time to determine the ordering of the records in the table.
Clearly, this can be done by constructing $k!$ indexes, one for each ordering of the fields. Insert and remove must then update every index, taking $O(k!\,k\log N)$ time, while a query consults only the index matching the requested ordering.
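As an illustration of this baseline (again a hedged sketch; the name `MultiOrderIndex` is hypothetical, and a sorted list stands in for a balanced tree), each of the $k!$ field permutations gets its own ordered structure:

```python
import bisect
from itertools import permutations

class MultiOrderIndex:
    """Naive baseline: one sorted list per permutation of the k fields.

    Every insert/remove touches all k! lists, giving the O(k! * k log N)
    update cost described above (with a balanced tree in place of each
    sorted list); a query uses only the one list matching the requested
    ordering, costing O(k log N) comparison work.
    """

    def __init__(self, k):
        self.k = k
        self.lists = {perm: [] for perm in permutations(range(k))}

    def _project(self, rec, perm):
        # reorder the record's fields according to the permutation
        return tuple(rec[i] for i in perm)

    def insert(self, rec):
        for perm, rows in self.lists.items():
            bisect.insort(rows, self._project(rec, perm))

    def remove(self, rec):
        for perm, rows in self.lists.items():
            key = self._project(rec, perm)
            i = bisect.bisect_left(rows, key)
            if i < len(rows) and rows[i] == key:
                del rows[i]

    def successor(self, key, perm):
        """First record strictly greater than `key` under field ordering `perm`,
        returned with its fields back in the original order."""
        rows = self.lists[perm]
        i = bisect.bisect_right(rows, self._project(key, perm))
        if i == len(rows):
            return None
        hit = rows[i]
        out = [None] * self.k
        for pos, field in enumerate(perm):
            out[field] = hit[pos]
        return tuple(out)

# Example with k = 3 fields
m = MultiOrderIndex(3)
for rec in [(1, 5, 2), (1, 7, 0), (2, 0, 9)]:
    m.insert(rec)
print(m.successor((1, 5, 2), perm=(2, 0, 1)))  # order by field 3, then 1, then 2 -> (2, 0, 9)
```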
We want an algorithm whose operations are much faster as a function of $k$, preferably $O(k\log k\log N)$ time per operation.
Does such an algorithm/data structure exist? It seems likely that if it existed, someone would have published and implemented it by now, but I've found no sign of one. Alternatively, perhaps it can be proven that no such algorithm exists, but I've found no sign of such a proof either.