torch.cuda.comm.gather

torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None)
Gathers tensors from multiple GPU devices.
Parameters

- tensors (Iterable[Tensor]) – an iterable of tensors to gather. Tensor sizes in all dimensions other than dim have to match.
- dim (int, optional) – a dimension along which the tensors will be concatenated. Default: 0.
- destination (torch.device, str, or int, optional) – the output device. Can be CPU or CUDA. Default: the current CUDA device.

Keyword Arguments

- out (Tensor, optional) – the tensor to store the gather result. Its sizes must match those of tensors, except for dim, where the size must equal sum(tensor.size(dim) for tensor in tensors). Can be on CPU or CUDA.
Note

destination must not be specified when out is specified.

Returns
- If destination is specified, a tensor located on the destination device, that is the result of concatenating tensors along dim.
- If out is specified, the out tensor, now containing the result of concatenating tensors along dim.
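A minimal sketch of gathering per-GPU shards onto the CPU. The shard sizes and shapes here are hypothetical; the sketch assumes at least two CUDA devices and, on machines without them, falls back to reporting the expected output shape implied by the size rule above (the dim-0 size of the result is the sum of the shards' dim-0 sizes).

```python
import torch

def gather_example():
    shard_rows = [2, 3]              # hypothetical rows per GPU shard
    expected_rows = sum(shard_rows)  # gather concatenates along dim
    if torch.cuda.device_count() >= 2:
        # One shard per device; all non-dim sizes (here, 4 columns) match.
        shards = [
            torch.randn(shard_rows[i], 4, device=f"cuda:{i}")
            for i in range(2)
        ]
        # Gather onto the CPU; destination may also be a CUDA device.
        out = torch.cuda.comm.gather(
            shards, dim=0, destination=torch.device("cpu")
        )
        assert out.shape == (expected_rows, 4)
        return tuple(out.shape)
    # CPU-only fallback: just the shape the gather would produce.
    return (expected_rows, 4)

print(gather_example())
```

Passing out= instead of destination= stores the result in a preallocated tensor, whose dim-0 size must likewise equal the sum above.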
© 2024, PyTorch Contributors
PyTorch has a BSD-style license, as found in the LICENSE file.
https://pytorch.org/docs/2.1/generated/torch.cuda.comm.gather.html