I am following OpenAI's Spinning Up tutorial, Part 3: Intro to Policy Optimization. It is mentioned there that the reward-to-go form of the policy gradient has lower variance than the full-return form. While I understand the intuition behind it, I struggle to find a proof of this claim in the literature.
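For reference, these are the two forms of the gradient from the tutorial (roughly in Spinning Up's notation). The basic estimator weights every log-probability term by the return of the whole trajectory,

$$\nabla_\theta J(\pi_\theta) = \mathop{\mathbb{E}}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right],$$

while the reward-to-go estimator weights it only by the rewards that come after the action,

$$\nabla_\theta J(\pi_\theta) = \mathop{\mathbb{E}}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t'=t}^{T} r_{t'}\right].$$

To make the claim concrete, here is a minimal numerical sanity check (not a proof, just an illustration). The toy setup is my own: a Bernoulli policy $\pi_\theta(a=1) = \sigma(\theta)$, $T$ independent timesteps, and a made-up reward $r_t = a_t + \text{Gaussian noise}$, so both estimators can be computed per trajectory and their empirical variances compared.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))  # Bernoulli policy: P(a_t = 1) = sigmoid(theta)
T = 10                            # horizon
sigma = 1.0                       # reward noise scale (toy choice)
n = 200_000                       # number of Monte Carlo trajectories

# Sample all trajectories at once: actions and noisy rewards.
a = (rng.random((n, T)) < p).astype(float)    # a_t ~ Bernoulli(p)
r = a + sigma * rng.standard_normal((n, T))   # toy reward r_t = a_t + noise

score = a - p  # d/dtheta log pi_theta(a_t) for a sigmoid-parameterized Bernoulli

# Full-return estimator: (sum_t score_t) * R(tau)
g_full = score.sum(axis=1) * r.sum(axis=1)

# Reward-to-go estimator: sum_t score_t * sum_{t' >= t} r_{t'}
rtg = np.cumsum(r[:, ::-1], axis=1)[:, ::-1]
g_rtg = (score * rtg).sum(axis=1)

print(f"true gradient      : {T * p * (1 - p):.4f}")
print(f"mean (full return) : {g_full.mean():.4f}   var: {g_full.var():.2f}")
print(f"mean (reward-to-go): {g_rtg.mean():.4f}   var: {g_rtg.var():.2f}")
```

On this toy problem both estimators agree in expectation (both match $T\,p\,(1-p)$), but the reward-to-go one has visibly lower sample variance. That is exactly the effect I would like to see proven in general, not just observed empirically.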
- Does the answer to this question answer yours as well? – user5093249 Jun 10 '20 at 13:55
- No, the linked question only proves that the reward-to-go does not introduce any bias into the gradient estimate. – sirKris van Dela Jun 10 '20 at 14:14
- This is nontrivial to prove; in fact, almost anything involving stochastic function approximation is nontrivial. You can search research papers, but you won't find it in any book right now. – FourierFlux Jun 10 '20 at 14:33