I'm experimenting with a very simple custom model that takes one input and produces one output. The model equation is y = sin(ax + b), where a and b are single learnable scalars, y is the output, and x is the input. I'm using (target - y)**2 as the loss function. My setup is below. Now suppose I feed it points sampled from sin(2x + 3) as training data. In principle, backpropagation should drive (a, b) to approximate sin(2x + 3), but whatever I try, the result is wrong. I'm using PyTorch's backpropagation. Can somebody explain why it isn't working? (I'd appreciate a mathematical proof of the reason.) If you think I'm doing something wrong in my PyTorch implementation, could you write your own implementation in PyTorch?
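Since the actual setup code isn't shown, here is a minimal sketch of the kind of setup described: two learnable scalars a and b, the model y = sin(a*x + b), squared-error loss, and training data drawn from sin(2x + 3). The data range, learning rate, step count, and initial values of a and b are all assumptions, not from the original post:

```python
import torch

# Training data: x on [-1, 1], targets from the "true" function sin(2x + 3)
torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 200)
target = torch.sin(2.0 * x + 3.0)

# a and b are single learnable scalars (initial values chosen arbitrarily)
a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

optimizer = torch.optim.SGD([a, b], lr=0.1)

for step in range(2000):
    optimizer.zero_grad()
    y = torch.sin(a * x + b)          # model: y = sin(ax + b)
    loss = ((target - y) ** 2).mean() # mean squared error, as in the question
    loss.backward()                   # PyTorch backpropagation
    optimizer.step()

print(a.item(), b.item(), loss.item())
```

Note that because the loss surface of sin(ax + b) is non-convex in (a, b), where this converges depends on the starting values; it may settle in a local minimum rather than at (2, 3) or an equivalent solution such as (a, b + 2πk).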