Reading the definition of maximum likelihood estimation (MLE), it sounds like the idea is: "Given a likelihood function, estimate the parameters that make the observed data most likely."
When I read that, it sounds like it has the same goal as what backpropagation does in neural networks: find the most likely parameters given the "likelihood function" of the neural network.
My question is: are these two concepts the same, apart from the following two differences?
- MLE can be applied to any model, whereas backprop is only for neural networks.
- Backprop works backwards to find the best parameters, while MLE works forwards.
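To make my mental picture concrete, here's a toy sketch I put together (the numbers and setup are my own, just for illustration): estimating the mean of Gaussian data, once with the closed-form MLE answer and once by stepping along the gradient of the log-likelihood, which is the kind of thing I understand backprop-style training to be doing. Please correct me if this picture is wrong!

```python
import numpy as np

# Toy data: samples from a Gaussian with unknown mean (sigma fixed at 1).
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)

# MLE "directly": for a Gaussian with known sigma, the closed-form
# maximum-likelihood estimate of the mean is just the sample average.
mle_closed_form = data.mean()

# The same estimate found by gradient ascent on the log-likelihood:
# compute the gradient of the objective w.r.t. the parameter, then
# step uphill, repeatedly.
mu = 0.0   # initial guess for the parameter
lr = 0.1   # step size
for _ in range(200):
    grad = np.mean(data - mu)  # d/dmu of the average log-likelihood
    mu += lr * grad

print(mle_closed_form, mu)  # both land near the true mean of 3.0
```

Both routes end up at essentially the same number, which is part of why I suspect the two concepts overlap.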
Any thoughts would be greatly appreciated; the less technical the better, since I am still in the learning phase. Thanks!