4
lower_bounds = torch.max(set_1[:, :2].unsqueeze(1), 
                         set_2[:, :2].unsqueeze(0))   #(n1, n2, 2)

This code snippet uses unsqueeze(1) for one tensor, but unsqueeze(0) for the other. What is the difference between them?

iacob
    Does this answer your question? [What does "unsqueeze" do in Pytorch?](https://stackoverflow.com/questions/57237352/what-does-unsqueeze-do-in-pytorch) – akshayk07 Jun 19 '20 at 08:04
  • PyTorch supports numpy style [broadcasting semantics](https://pytorch.org/docs/stable/notes/broadcasting.html). This explains why you get the observed shape of `lower_bounds` when the two arguments are unsqueezed along different dimensions. – jodag Jun 19 '20 at 13:54
  • The parameter is the direction to add the dimension in. See the documentation for more info. – muman Jun 19 '20 at 13:55

1 Answer

2

unsqueeze turns an n-dimensional tensor into an (n+1)-dimensional one by inserting an extra dimension of size 1. However, since it is ambiguous where the new axis should lie (i.e. in which direction the tensor should be "unsqueezed"), this needs to be specified by the dim argument.
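A quick sketch of this on a small tensor (shapes shown in comments) illustrates how the choice of dim moves the new axis:

```python
import torch

x = torch.arange(6).reshape(2, 3)  # shape (2, 3)

a = x.unsqueeze(0)  # new axis first:  shape (1, 2, 3)
b = x.unsqueeze(1)  # new axis middle: shape (2, 1, 3)
c = x.unsqueeze(2)  # new axis last:   shape (2, 3, 1)

print(a.shape, b.shape, c.shape)
# torch.Size([1, 2, 3]) torch.Size([2, 1, 3]) torch.Size([2, 3, 1])
```

All three tensors contain the same six values; only the indexing changes, e.g. `x[0, 1] == b[0, 0, 1]`.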

Hence the resulting unsqueezed tensors have the same information, but the indices used to access them are different.

Here is a visual representation of what squeeze/unsqueeze do for a 2D matrix. Going from a 2D tensor to a 3D one, there are three possible positions for the new dimension:

[Diagram: the three possible axis positions when unsqueezing a 2D matrix, and the corresponding squeeze operations]
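Tying this back to the snippet in the question: unsqueezing the two sets along different dimensions makes their shapes broadcast against each other, so `torch.max` computes an all-pairs elementwise maximum. A minimal sketch with made-up coordinates (n1=3, n2=2 here are illustrative values):

```python
import torch

set_1 = torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # (n1, 2) with n1=3
set_2 = torch.tensor([[0.5, 0.5], [1.5, 1.5]])              # (n2, 2) with n2=2

a = set_1[:, :2].unsqueeze(1)   # (n1, 1, 2)
b = set_2[:, :2].unsqueeze(0)   # (1, n2, 2)

# Broadcasting expands both to (n1, n2, 2): entry [i, j] is the
# elementwise max of set_1[i] and set_2[j].
lower_bounds = torch.max(a, b)
print(lower_bounds.shape)  # torch.Size([3, 2, 2])
```

If both tensors were unsqueezed along the same dimension, no broadcasting would occur and the result would keep the original 2D shape (or error if n1 != n2).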

iacob