
Consider the following code:

import torch

v = torch.tensor([1, 2, 3, 4]).unsqueeze(1)

new_vec = []
for i in range(v.shape[0]):
    # roll by -i shifts the rows up by i positions (i = 0 is the identity)
    new_vec.append(torch.roll(v, -i, dims=0))
new_vec = torch.cat(new_vec, dim=0)

This code takes the tensor [[1], [2], [3], [4]] and generates each of its cyclic shifts:

[[1], [2], [3], [4]] -> [[2], [3], [4], [1]] -> [[3], [4], [1], [2]] -> [[4], [1], [2], [3]]

and then concatenates all of these shifts (including the original, produced by i = 0) along the batch dimension (dim=0).

Note that if the tensor has k rows, there are always k rolls, so the result has shape (k * k, 1).

Is there any way to do this operation (not sure if it has a name) more efficiently, without the loop?
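One loop-free sketch (not from the original post, so treat it as an assumption): build a (k, k) index matrix with broadcasting, where entry (i, j) is (i + j) % k, then use advanced indexing to gather all the shifted rows at once. Row block i of the result is exactly v rolled by -i.

```python
import torch

v = torch.tensor([1, 2, 3, 4]).unsqueeze(1)
k = v.shape[0]

# idx[i, j] == (i + j) % k, i.e. row i of idx holds the row order
# of v rolled by -i
idx = (torch.arange(k).unsqueeze(1) + torch.arange(k).unsqueeze(0)) % k

# v[idx] has shape (k, k, 1); flattening the first two dims reproduces
# the concatenation of all k rolls along the batch dimension
new_vec = v[idx].reshape(-1, 1)  # shape (k * k, 1)
```

This replaces the Python-level loop with a single gather, at the cost of materializing the (k, k) index matrix.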

Susmit Agrawal