I have just started studying linear regression using a neural network, so I believe this is a naive question.
I have a neural network for a linear regression model with 3 independent variables. There is 1 hidden layer with 3 neurons, each with a linear activation. The output neuron is just another linear unit that sums the three outputs from the hidden layer, and its predicted value is then compared with the actual value.
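In case it helps, here is a minimal sketch of what I mean (using Keras only as an example; the layer sizes just match my description above, and the variable names are mine):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 3 inputs -> 3 hidden neurons (linear) -> 1 output neuron (linear)
model = keras.Sequential([
    keras.Input(shape=(3,)),
    layers.Dense(3, activation="linear"),   # hidden layer
    layers.Dense(1, activation="linear"),   # output layer
])
model.compile(optimizer="sgd", loss="mse")  # compare predictions with actual values via MSE

# After training, each layer exposes its own weights and bias:
W1, b1 = model.layers[0].get_weights()  # weights before the hidden layer (3x3 kernel)
W2, b2 = model.layers[1].get_weights()  # weights after the hidden layer (3x1 kernel)
```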
I am wondering which set of weights I should use as the parameters of the linear regression model: the weights before the hidden layer (between the inputs and the hidden neurons) or the weights after the hidden layer (between the hidden neurons and the output). I hope to hear some explanations.
