I'm trying to understand the notion of relative error for vectors in $\mathbb{R}^n$, but it's not "clicking" somehow.
$$\operatorname{\varepsilon-rel}(x_\text{approx}, x) = \frac{||x_\text{approx} - x ||}{||x||}$$
The intuition I have for the relative error between two positive real numbers is that it approximates something symmetric, $$\operatorname{\varepsilon-rel}(x_\text{approx}, x) \approx \log(x_\text{approx})-\log(x),$$ which I can think of as a (signed) distance metric.
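To spell out where that approximation comes from: writing the signed relative error as $(x_\text{approx} - x)/x$, a first-order Taylor expansion $\log(1+t) \approx t$ gives
$$\log(x_\text{approx}) - \log(x) = \log\!\left(1 + \frac{x_\text{approx} - x}{x}\right) \approx \frac{x_\text{approx} - x}{x},$$
so the log distance and the (signed) relative error agree to first order when the error is small.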
Here's what I can say about the vector case:
1. It's invariant under rotation.
2. It's invariant under uniform scaling.
3. If $x$ and $x_\text{approx}$ are collinear, say $x_\text{approx} = \alpha x$, then you recover the univariate relative error $|\alpha - 1|$. But this kinda breaks down when $\alpha < 0$: the log-based univariate notion is undefined when the signs differ (you can't take the log of a negative number), while the multivariate error stays finite.
4. If the error $x_\text{approx} - x$ is orthogonal to $x$, then $\operatorname{\varepsilon-rel}$ is the $\tan$ of the angle between $x$ and $x_\text{approx}$ (opposite side $\|x_\text{approx} - x\|$ over adjacent side $\|x\|$).
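Here's a quick numerical sanity check of these properties in plain Python (the helper names `rel_err`, `norm`, and `rotate` are my own, and the specific vectors are arbitrary). One detail worth noting: when the error is orthogonal to $x$, the ratio comes out as the *tangent* of the angle, since the denominator is $\|x\|$ (the adjacent side of the right triangle) rather than $\|x_\text{approx}\|$ (the hypotenuse).

```python
import math

def norm(v):
    # Euclidean norm of a vector given as a list of floats
    return math.sqrt(sum(c * c for c in v))

def rel_err(x_approx, x):
    # Relative error ||x_approx - x|| / ||x||
    diff = [a - b for a, b in zip(x_approx, x)]
    return norm(diff) / norm(x)

def rotate(v, theta):
    # Rotate a 2D vector by angle theta
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

x = [3.0, 4.0]
xa = [3.1, 3.9]

# 1. Invariance under a common rotation
e0 = rel_err(xa, x)
e1 = rel_err(rotate(xa, 0.7), rotate(x, 0.7))
assert abs(e0 - e1) < 1e-12

# 2. Invariance under uniform scaling
e2 = rel_err([2.5 * c for c in xa], [2.5 * c for c in x])
assert abs(e0 - e2) < 1e-12

# 3. Collinear case: x_approx = alpha * x gives |alpha - 1|
alpha = 0.9
e3 = rel_err([alpha * c for c in x], x)
assert abs(e3 - abs(alpha - 1)) < 1e-12

# 4. Orthogonal error: rel_err equals tan(angle between x and x_approx)
e_vec = [-4.0, 3.0]  # orthogonal to x
xb = [x[0] + e_vec[0], x[1] + e_vec[1]]
angle = math.acos((x[0] * xb[0] + x[1] * xb[1]) / (norm(x) * norm(xb)))
assert abs(rel_err(xb, x) - math.tan(angle)) < 1e-12
```

All four assertions pass (up to floating-point noise), which at least confirms the properties numerically even if it doesn't make the geometry click.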
Your thoughts would be much appreciated.