The difference between classification and regression is that classification outputs a prediction probability for a class (or classes) while regression outputs a continuous value. We can make a neural network output a value by simply changing the activation function in the final layer.
Instead of an activation such as sigmoid, ReLU, or tanh, we use the identity function $f(x) = x$ in the output layer. During backpropagation we then only need its derivative, which is simply $f'(x) = 1$.
For illustration I will provide the forward and backward passes for a regression network with a single hidden layer below:
forward pass:
$\text{inputs} \rightarrow x$
$\text{weights input to hidden} \rightarrow W_1$
$\text{weights hidden to output} \rightarrow W_2$
$z_2 = W_1 x$
$a_2 = sigmoid(z_2)$
$z_3 = W_2 a_2$
$a_3 = f(z_3) = z_3$
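As a sketch, the forward pass can be written in NumPy like this. The layer sizes (3 inputs, 4 hidden units, 1 output) are my own illustrative choices, not from the question:

```python
import numpy as np

# Illustrative sizes: 3 input features, 4 hidden units, 1 output.
rng = np.random.default_rng(0)
x  = rng.standard_normal((3, 1))   # inputs,  shape (3, 1)
W1 = rng.standard_normal((4, 3))   # weights input -> hidden
W2 = rng.standard_normal((1, 4))   # weights hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z2 = W1 @ x        # hidden pre-activation, shape (4, 1)
a2 = sigmoid(z2)   # hidden activation,     shape (4, 1)
z3 = W2 @ a2       # output pre-activation, shape (1, 1)
a3 = z3            # identity activation f(x) = x, so the output is the raw value
```

Note that the output layer applies no squashing at all, which is exactly what lets the network predict arbitrary real values.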
backward pass:
$\text{targets} \rightarrow Y$
$f(x) = x \Rightarrow f'(x) = 1$
$sigmoid'(x) = sigmoid(x)(1 - sigmoid(x))$
$d_3 = Y - a_3$
$d_2 = W_2^T d_3$
$\Delta W_2 = d_3\, a_2^T$
$\Delta W_1 = (d_2 \odot a_2(1 - a_2))\, x^T$
Here $d_3$ and $d_2$ are the layer-wise errors.
Please make sure the matrix dimensions are handled properly when implementing these equations in code.
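Putting both passes together, here is a minimal NumPy sketch of a training loop on a single example. The layer sizes, learning rate, iteration count, and target value are all my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x  = rng.standard_normal((3, 1))   # single input example
Y  = np.array([[2.0]])             # regression target (illustrative value)
W1 = rng.standard_normal((4, 3))   # input -> hidden weights
W2 = rng.standard_normal((1, 4))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(1000):
    # forward pass
    z2 = W1 @ x
    a2 = sigmoid(z2)
    z3 = W2 @ a2
    a3 = z3                               # identity output, f(z3) = z3
    # backward pass
    d3 = Y - a3                           # output error (f'(z3) = 1)
    d2 = W2.T @ d3                        # error propagated to the hidden layer
    grad_W2 = d3 @ a2.T                   # shape (1, 4), matches W2
    grad_W1 = (d2 * a2 * (1 - a2)) @ x.T  # shape (4, 3), matches W1
    # with d3 = Y - a3 these point uphill on (Y - a3),
    # i.e. downhill on the squared error, so we add them
    W2 += lr * grad_W2
    W1 += lr * grad_W1

print(float(a3))  # converges toward the target 2.0
```

The shape comments next to `grad_W2` and `grad_W1` are where the dimension warning matters: each gradient must have exactly the shape of the weight matrix it updates, which is why the transposes appear where they do.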