Background
I'll confess: I like Richard McElreath's lectures so much that I've been working through his backlog of older recordings, even though I've already seen all of the recent lectures of the same course.
In Lecture 15 of Statistical Rethinking 2015, he says:
Ranks I'm not going to have time to talk about. Rank data is terrible. Absolutely terrible. So there are two things to say about ranks and then I'll never mention them again.
The first is never collect rank data, if you can avoid it. You don't want your primary data to be ranked. What's the problem with ranks? You have to take the whole vector of ranks simultaneously because they're exclusive. Right so if somebody is number one, none of the other things can be number one. No longer can you treat the cases separately. So you've got a bunch of individuals and you had somebody rank them on some scale; man, now you've got to predict the whole vector of ranks simultaneously out of your model. There are model types that do this, but you don't want to go down that road. At least not without me. So the best thing is not to collect rank data.
The second thing is don't transform data that's not ranked into ranks. And there's a tradition of telling people to do this for some reason. And I would like to discourage you from doing that.
So if you find yourself in a situation like this come to me and there are alternatives. If you must deal with rank data there are ways to deal with it, it's annoying but it's doable. And definitely don't transform things into ranks.
Side notes:
- His claim that ranks are exclusive is not entirely right or complete: whether ranks are exclusive depends on the ranking method (e.g., how ties are handled) and on the data.
- In later years he espouses trace rank plots, so I don't think he considers ranks bad through and through.
Question
Richard mentions that there are (presumably Bayesian) methods that involve predicting an entire random vector of ranks.
To avoid being too general, let's construct a concrete example.
$$\vec X \sim \text{Exponential} \left( \vec \lambda \right)$$
$$\text{ranking} \left( \vec X \right) = \begin{bmatrix} \text{rank}(X_1) \\ \vdots \\ \text{rank}(X_n) \end{bmatrix} \sim \text{Unknown}$$
Since the exponential distribution is continuous, ties occur with probability zero, so the ranking vector is almost surely a permutation of $\{1, \ldots, n\}$; if we didn't care about the order of the components (i.e., if they were exchangeable), every permutation would be equally likely. But here we are concerned with the whole random vector, and some ranking vectors will be more or less probable depending on $\vec \lambda$.
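To see that dependence concretely, take $n = 2$ (assuming independent components and that $\vec \lambda$ holds rate parameters); the usual exponential-race calculation gives
$$\Pr\left[ \text{ranking}\left( \vec X \right) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right] = \Pr[X_1 < X_2] = \int_0^\infty \lambda_1 e^{-\lambda_1 x} \, e^{-\lambda_2 x} \, dx = \frac{\lambda_1}{\lambda_1 + \lambda_2},$$
which equals $1/2$ (uniform over the two possible rankings) only when $\lambda_1 = \lambda_2$.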
I started thinking about the change of variables but decided to make a meme instead.

But seriously, if I define
$$\text{rank}(X_i) \triangleq 1 + \sum_{\substack{1 \leq j \leq n \\ j \neq i}} \mathbb{I}[X_j \leq X_i]$$
then there would be something similar to a derivative (or Laplacian) of each of the indicators. I'm assuming such an operator would be a derivation and would therefore distribute across the sum. From these distributional derivatives, I'm imagining that a "distributional Jacobian" could be used to work out a change of variables.
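For what it's worth, here is a small Monte Carlo sketch of the setup (Python/NumPy; the function name `ranking`, the seed, and the particular rates are mine and purely illustrative, and I'm assuming independent components with $\vec \lambda$ as rates):

```python
import itertools
from collections import Counter

import numpy as np


def ranking(x):
    """rank(X_i) = 1 + sum_{j != i} 1[X_j <= X_i]; with continuous data
    (no ties, almost surely) this is X_i's position in the sorted sample."""
    x = np.asarray(x)
    return tuple(int(np.sum(x <= xi)) for xi in x)


rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 4.0])  # illustrative rates, not from any real data
n_draws = 100_000

# NumPy parameterizes the exponential by scale = 1 / rate.
samples = rng.exponential(scale=1.0 / lam, size=(n_draws, lam.size))

# Empirical distribution over the n! possible ranking vectors.
counts = Counter(ranking(x) for x in samples)
for perm in itertools.permutations(range(1, lam.size + 1)):
    print(perm, counts[perm] / n_draws)
```

With unequal rates, the six possible ranking vectors show up with visibly different frequencies, which is exactly the dependence on $\vec \lambda$ that the model would need to capture.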
But that's just my guesswork. How would I actually work this probability model out?