What Does ⍋⍋ Even Mean?

Posted on August 4, 2023

A while back I was casually perusing the articles on Jot Dot Times and fell in love with almost everything that Paul Mansour writes up. In his Grade Down of Grade Up, he shares a quote that just feels like a challenge:

So the conclusion is that if we are only dealing in permutations ⍒⍋ is worthless since it’s simply a slower ⌽, and if we are not dealing in permutations even Adám doesn’t know what ⍒⍋ does so it is probably equally worthless.

Mansour’s answer is a really nice special application, but it turns out that there’s actually a satisfying general answer! We will consider general arrays, but let’s keep a concrete example in the back of our minds, something like this:

       ⊢Y←⎕UCS 97+?10⍴26
       ↑Y (⍋Y)
 l n p w b k m c x t
 4 7 5 0 6 1 2 9 3 8
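
For readers more at home in Python, ⍋ behaves like NumPy’s stable argsort: it returns the indices that would sort the array, not the sorted array itself. A small sketch (the vector is hard-coded from the session above):

```python
import numpy as np

# The article's example vector Y, hard-coded from the APL session above.
y = np.array(list("lnpwbkmcxt"))

# Grade Up: ⍋Y gives the indices that sort Y ascending (stable for ties).
grade_up = np.argsort(y, kind="stable")

print(grade_up.tolist())  # [4, 7, 5, 0, 6, 1, 2, 9, 3, 8], matching ⍋Y above
```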

Defining a couple of terms up front will make our life much easier:

- The index of an item is its position in \(Y\) (in ⎕IO←0).
- The rank of an item is its position in the sorted array, i.e. the number of items that sort before it.

Applying these to the definition of Grade Up, we can say that the \(i^{\rm th}\) element of ⍋Y simply indexes whichever item of \(Y\) has rank \(i\). With a little squinting, we also see that this is equivalent to the reference definition:

R is an integer vector being the permutation of ⍳1↑⍴Y that places the sub-arrays along the first axis in ascending order.

Let us add a little bit of notation. When the \(i^{\rm th}\) item of \(Y\) has rank \(j\), we write \(<i, j>\). Notice then that for ⍋Y the condition \(<j, i>\) holds. In other words, ⍋ swaps \(i\) and \(j\), meaning that ⍋⍋Y leaves \(<i, j>\) unchanged! When \(Y\) is a permutation vector, this makes ⍋⍋ the identity, but in general we can think of it as the function Rank, since it produces a vector of the item ranks.
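
In NumPy terms this is the familiar double-argsort idiom for computing ranks, and on a permutation it really is the identity, since argsort of a permutation is its inverse. A hedged sketch:

```python
import numpy as np

y = np.array(list("lnpwbkmcxt"))   # the example vector Y
rank = np.argsort(np.argsort(y))   # ⍋⍋Y: the rank of each item of Y
print(rank.tolist())               # [3, 5, 6, 8, 0, 2, 4, 1, 9, 7]

# On a permutation vector, ⍋ computes the inverse permutation,
# so ⍋⍋ applies the inverse twice and is the identity.
p = np.array([4, 7, 5, 0, 6, 1, 2, 9, 3, 8])
assert (np.argsort(np.argsort(p)) == p).all()
```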

Better yet, since ⍒ is equal to ⌽⍋, we can say that \(<N-j, i>\) holds for its result, where N=¯1+≢Y (in ⎕IO←0), and if we think of \(i, j\) as living in \(\mathbb{Z}_N\), this becomes even nicer: \(<-j, i>\). Indeed,

       ↑Y (⍋⍋Y)
 l n p w b k m c x t
 3 5 6 8 0 2 4 1 9 7

Effectively, the negative sign in our \(<\cdot, \cdot>\) symbol just means that the corresponding field starts counting from the back of the array, which we can think of as the function ReverseRank. Armed with this little bit of algebra, we can now fully characterize the fourfold combinations of ⍋ and ⍒:

Input        Grades  Output         Interpretation
\(<i, j>\)   ⍋⍋      \(<i, j>\)     Ascending Rank
\(<i, j>\)   ⍋⍒      \(<i, -j>\)    Ascending ReverseRank
\(<i, j>\)   ⍒⍋      \(<-i, j>\)    Descending Rank
\(<i, j>\)   ⍒⍒      \(<-i, -j>\)   Descending ReverseRank

Not bad! So the leading Grade chooses ascending or descending, and the trailing Grade chooses Rank or ReverseRank. Applying all four to our example,

       ↑Y (⍋⍋Y) (⍋⍒Y) (⍒⍋Y) (⍒⍒Y)
 l n p w b k m c x t
 3 5 6 8 0 2 4 1 9 7
 6 4 3 1 9 7 5 8 0 2
 7 9 1 4 2 0 8 6 5 3
 2 0 8 5 7 9 1 3 4 6

As a sanity check, note that Rank + ReverseRank should be constant. Indeed, for each item \(y \in Y\), the number of items less than \(y\) plus the number of items greater than \(y\) should equal the total number of items in \(Y\), minus 1 for \(y\) itself:

       ↑((⍋⍋Y)+⍋⍒Y)((⍒⍋Y)+⍒⍒Y)
 9 9 9 9 9 9 9 9 9 9
 9 9 9 9 9 9 9 9 9 9
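
The same identity in the NumPy sketch, again assuming distinct items: Rank plus ReverseRank is (≢Y)-1 everywhere.

```python
import numpy as np

y = np.array(list("lnpwbkmcxt"))
up = np.argsort(y)               # ⍋Y
rank = np.argsort(up)            # ⍋⍋Y: ascending rank
rev_rank = np.argsort(up[::-1])  # ⍋⍒Y: reverse rank (distinct items only)

# For distinct items, each item's rank + reverse rank is (≢Y)-1 = 9.
print((rank + rev_rank).tolist())  # [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
```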

Actually, there is a bit of nuance when items of \(Y\) are equal: Rank + ReverseRank will not be constant, because both ⍋ and ⍒ rank equal items the same way, with no directionality involved. This property is what makes having both primitives useful; otherwise (⍋≡⌽∘⍒)Y would always hold. It is exactly what Paul Mansour exploits in his article above with the AverageRank function.
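
A small illustration of this tie nuance, sketched in NumPy, where APL’s ⍒ on numbers can be modeled as a stable argsort of the negated values (so equal items keep ascending index order, just like ⍋):

```python
import numpy as np

a = np.array([1, 2, 1])                 # duplicates present

up = np.argsort(a, kind="stable")       # ⍋a → [0, 2, 1]
down = np.argsort(-a, kind="stable")    # ⍒a → [1, 0, 2]: the tied 1s stay in index order

# With ties, ⍋ is no longer the reverse of ⍒ ...
assert not np.array_equal(up, down[::-1])

# ... and Rank + ReverseRank is no longer constant.
print((np.argsort(up) + np.argsort(down)).tolist())  # [1, 2, 3], not all equal
```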