
I am doing a simple neural network regression and I notice that my predictions always have high variance at the edges (values 0 and 1 in the normalized case). A plot of the true versus predicted values is shown here:

[Image: true versus predicted values, with high scatter near 0 and 1]

I am using a linear activation as the output activation function, and I cannot fathom what could be causing this. Can anyone suggest (a) what causes this, and (b) what could be done to avoid it?

I see a similar post here, but I can't say I understand the answer provided: Shape of confidence interval for predicted values in linear regression

  • How do you normalize? – frank Apr 28 '22 at 10:01
  • I rescale all the inputs between $x_\min$ and $x_\max$. In my case, I am predicting angles, so I fix $x_\min = 0$ and $x_\max = 360$, so $x_\text{norm} = (x - x_\min)/(x_\max - x_\min)$. – Suhanya Apr 28 '22 at 10:05

1 Answer


In general, for debugging, you could take a closer look at the data points that are predicted badly to get more insight.

In this case, it looks like the problem comes from the fact that angles do not really live on an interval but on a circle, where $0$ and $2\pi$ ($0$ and $360$ degrees) are identified.

If you have small angles near $0$, noise can push them across the boundary so that they are recorded as large angles near $2\pi$ (and vice versa). This confuses the neural network, and it will not give good predictions near $0$.
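To see why this hurts a squared-error loss, consider a hypothetical toy example (the numbers are made up purely for illustration): a true angle of 2 degrees with a small negative noise term wraps around to 358 degrees, so after your min-max normalization the target jumps from about 0.006 to about 0.994.

```python
# Hypothetical illustration of the wrap-around problem: a small amount of
# noise pushes an angle near 0 across the boundary to near 360.
true_angle = 2.0                          # degrees
noisy_angle = (true_angle - 4.0) % 360.0  # -4 degrees of noise wraps it around
print(noisy_angle / 360.0)                # ~0.994, while the true target is ~0.006
```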

So it may be better to predict a point on the unit circle instead, i.e. change your response variable from a scalar angle into a two-dimensional vector $(\sin\theta, \cos\theta)$.
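A minimal sketch of that encoding, assuming NumPy and angles in degrees (the network would then have two output units, still with linear activation, and `arctan2` maps a prediction back to an angle even when the predicted vector is not exactly on the unit circle):

```python
import numpy as np

def encode_angle(deg):
    """Map a scalar angle in degrees to a point (sin, cos) on the unit circle."""
    rad = np.deg2rad(deg)
    return np.stack([np.sin(rad), np.cos(rad)], axis=-1)

def decode_angle(vec):
    """Map a (sin, cos) prediction back to an angle in [0, 360)."""
    rad = np.arctan2(vec[..., 0], vec[..., 1])
    return np.rad2deg(rad) % 360.0

targets = encode_angle(np.array([2.0, 358.0]))  # training targets for the network
print(decode_angle(targets))                    # [  2. 358.]
```

Because $\sin$ and $\cos$ are continuous across the $0$/$360$ boundary, nearby angles always map to nearby targets, which removes the jump that was confusing the network.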

frank