
I want to flatten any general n-dimensional torch.Tensor in a computationally efficient way. (By "flatten" here, I mean converting a given Tensor into a one-dimensional Tensor with the same number of elements as the original.) I am currently using the following steps to do so:

local original_tensor = -- output of some intermediate layer of a conv-net residing in the GPU
local shaping_tensor = torch.Tensor(original_tensor:nElement())
original_tensor = original_tensor:resizeAs(shaping_tensor:cuda())
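
For concreteness, a self-contained version of these steps might look like the following; a hypothetical random CUDA tensor stands in for the intermediate layer output, since the real value is not shown:

require 'cutorch'

-- hypothetical stand-in for the intermediate conv-net output, already on the GPU
local original_tensor = torch.rand(16, 32, 8, 8):cuda()

-- allocate a CPU tensor with the same number of elements, push it to the GPU
-- with :cuda(), and resize the original tensor to match its 1-D shape
local shaping_tensor = torch.Tensor(original_tensor:nElement())
original_tensor = original_tensor:resizeAs(shaping_tensor:cuda())
print(original_tensor:dim())   -- 1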

I believe this is slightly inefficient because of the :cuda() call, which pushes the new tensor from main memory to the GPU. Can someone please suggest a more efficient way to do this?

Thanks in advance.

dyno8426

2 Answers


The typical approach is to create a view (thus not copying or actually reshaping the underlying data):

x:view(x:nElement())

This comes directly from the official "Torch for Numpy users" guide: https://github.com/torch/torch7/wiki/Torch-for-Numpy-users
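
A minimal sketch of the view approach, assuming cutorch is available and a random CUDA tensor stands in for the intermediate layer output:

require 'cutorch'

local x = torch.rand(16, 32, 8, 8):cuda()   -- hypothetical 4-D activation on the GPU
local flat = x:view(x:nElement())           -- 1-D view over the same storage, no copy
print(flat:dim(), flat:nElement())          -- 1    32768

-- the view shares storage with x, so writes through flat are visible in x
flat[1] = 0
print(x[1][1][1][1])                        -- 0

Note that :view requires a contiguous tensor; if the activations might be non-contiguous (for example after a transpose), x:contiguous():view(x:nElement()) is the safe variant, at the cost of a copy.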

lejlot

Isn't this solved with the reshape command? See the documentation and this example.

I assume that you know how to grab the dimensions of original_tensor. Multiply them together to get the vector size.

local my_vector = nn.Reshape(vector_size):forward(original_tensor)

Am I missing something? Is this still not efficient enough? It should be a highly parallel assignment.
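
A minimal sketch of how that might look with the nn.Reshape module, assuming nn and cutorch are available and a random CUDA tensor stands in for original_tensor (note that, unlike a view, this path may copy the data):

require 'nn'
require 'cutorch'

local original_tensor = torch.rand(16, 32, 8, 8):cuda()
local vector_size = original_tensor:nElement()    -- product of all dimensions

-- nn.Reshape is a module: construct it with the target shape, move it to the
-- GPU, then call :forward on the tensor to be flattened
local reshaper = nn.Reshape(vector_size):cuda()
local my_vector = reshaper:forward(original_tensor)
print(my_vector:dim(), my_vector:nElement())      -- 1    32768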

Prune