When talking about artificial neurons (inputs, weights, and biases), I understand the role of everything except the bias.
In short, if we have a neuron such as sigmoid(sum(w*x) + b), I get that the weights basically say which of the inputs is more "important", but what about the bias? I've read in another question that the bias indicates how far off our predictions are from the real values.
But how can that be true if we initialize the biases at random? And isn't that supposed to be the job of the loss/cost function?
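To make my notation concrete, here is a minimal sketch of the neuron I have in mind (plain Python; the function and variable names are just my own):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, shifted by the bias, then squashed by the sigmoid.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Same inputs and weights, only the bias changes:
inputs = [0.5, -1.0, 2.0]
weights = [0.8, 0.2, -0.5]
print(neuron(inputs, weights, bias=0.0))  # ~0.31
print(neuron(inputs, weights, bias=2.0))  # ~0.77, the bias shifts the neuron's output
```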