So, I have created a neural network using backpropagation and the sigmoid activation function. It seems to work for XOR and for 28×28 images. However, when I give it a 100×100 image, the mean squared error stays around 0.3.
I am using one hidden layer. I basically have two questions:
- Is it possible to use a neural network with a single hidden layer, rather than a deep neural network, to learn images of size 100×100? If so, could you give me a detailed explanation? I have posted the code that I wrote below; sorry in advance, as it is not the best or cleanest code.
- When passing the error gradient back through a deep neural network with 2 hidden layers, do you pass the error gradient calculated by the output layer to the 2nd hidden layer, and does the 2nd hidden layer then calculate the hidden error gradient for the 1st hidden layer, while the weights are being updated? Is this correct?
Here is the link!
A2: Yes, you are correct; that's why it's called backpropagation.
– Iliyan Bobev Sep 05 '16 at 19:00

[ 0.225061 0.82384 0.774939 0.029444 0.979184 0.647941 0.17616 0.647941 0.82384 0.029257 0.979252 0.774939 0.352059 0.774939 0.647941 0.0290735 0.979318 0.82384 ]
0.0474061 0.0474061 0.0474061 0.000142677 7.17506e-05 0.0474061
– Sad.coder Sep 05 '16 at 19:35
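
For what it's worth, here is a minimal NumPy sketch of the gradient chain asked about in the second question: the output layer's delta is passed back to the 2nd hidden layer, whose delta is in turn passed back to the 1st hidden layer, and the weights are updated only after all deltas have been computed. The layer sizes, variable names, and learning rate are illustrative assumptions, not the poster's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_h1, n_h2, n_out = 4, 8, 6, 2   # illustrative layer sizes
lr = 0.1                               # learning rate

W1 = rng.normal(scale=0.1, size=(n_in, n_h1))
W2 = rng.normal(scale=0.1, size=(n_h1, n_h2))
W3 = rng.normal(scale=0.1, size=(n_h2, n_out))

x = rng.random((1, n_in))              # one dummy training example
t = np.array([[0.0, 1.0]])             # its target

# Forward pass
h1 = sigmoid(x @ W1)
h2 = sigmoid(h1 @ W2)
y  = sigmoid(h2 @ W3)

# Backward pass: each hidden delta is computed from the delta of the layer
# above it, using the weights as they were before any update.
delta_out = (y - t) * y * (1 - y)                # output-layer error gradient
delta_h2  = (delta_out @ W3.T) * h2 * (1 - h2)   # passed back to 2nd hidden layer
delta_h1  = (delta_h2 @ W2.T) * h1 * (1 - h1)    # passed back to 1st hidden layer

# Weight updates (gradient descent on the squared error), done after all
# deltas have been computed
W3 -= lr * h2.T @ delta_out
W2 -= lr * h1.T @ delta_h2
W1 -= lr * x.T  @ delta_h1
```

The key point for the second question is the ordering: compute delta_out, then delta_h2 from delta_out and the old W3, then delta_h1 from delta_h2 and the old W2, and only then apply the weight updates.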