What is the PyTorch equivalent of tf.stop_gradient() (which provides a way to avoid computing gradients with respect to some variables during back-propagation)?
– aerin
Do any of these answer your question? https://datascience.stackexchange.com/questions/32651/what-is-the-use-of-torch-no-grad-in-pytorch https://stackoverflow.com/questions/56816241/difference-between-detach-and-with-torch-nograd-in-pytorch/56817594 – Stef Sep 16 '20 at 14:30
2 Answers
You could use x.detach(). It returns a new tensor that shares the same data as x but is detached from the computation graph, so no gradient flows back through it during back-propagation.
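A minimal sketch of the effect, assuming a small illustrative tensor:

```python
import torch

x = torch.ones(3, requires_grad=True)

# y is computed from x but detached: no gradient flows through this path.
y = (x * 2).detach()

z = (x * 2).sum() + y.sum()
z.backward()
print(x.grad)  # tensor([2., 2., 2.]) – only the non-detached path contributed
```

Note that the detached tensor shares storage with the original, so in-place modifications to one are visible in the other.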
– Deepali
Link to the documentation: https://pytorch.org/docs/master/generated/torch.Tensor.detach.html – Astariul Apr 19 '21 at 01:56
Tensors in PyTorch have a requires_grad attribute. Set it to False to prevent gradient computation for that tensor.
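A minimal sketch of freezing parameters this way; the nn.Linear module here is just an illustrative example:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Freeze the weight matrix in place; only the bias will receive a gradient.
model.weight.requires_grad_(False)

out = model(torch.randn(1, 4)).sum()
out.backward()

print(model.weight.grad)  # None – no gradient was computed for the frozen weight
print(model.bias.grad)    # tensor([1., 1.])
```

The torch.no_grad() context manager linked in the comments is the complementary tool: it disables gradient tracking for everything computed inside the block, which is the usual choice for inference.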
– Shai