
I'm trying to understand the notion of relative error for vectors in $\mathbb{R}^n$, but it's not "clicking" somehow.

$$\operatorname{\varepsilon-rel}(x_\text{approx}, x) = \frac{||x_\text{approx} - x ||}{||x||}$$
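
For concreteness, here is how I compute it (a minimal NumPy sketch with made-up vectors):

```python
import numpy as np

def rel_error(x_approx, x):
    """Relative error ||x_approx - x|| / ||x|| in the Euclidean norm."""
    return np.linalg.norm(x_approx - x) / np.linalg.norm(x)

x = np.array([3.0, 4.0])
x_approx = np.array([3.1, 3.9])
print(rel_error(x_approx, x))  # ~0.0283
```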

The intuition I have for the relative error between two positive real numbers is that it approximates a symmetric quantity, $$\operatorname{\varepsilon-rel}(x_\text{approx}, x) \approx \log(x_\text{approx})-\log(x),$$ which I can think of as a (signed) distance metric.
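
A quick check with arbitrary scalar values shows the two quantities agree for small errors and drift apart for large ones:

```python
import numpy as np

x = 2.0
for dx in [1e-6, 1e-3, 0.1, 0.5]:
    x_approx = x + dx
    rel = abs(x_approx - x) / abs(x)
    log_diff = np.log(x_approx) - np.log(x)
    print(f"dx={dx:g}  rel={rel:.6f}  log-diff={log_diff:.6f}")
# The two agree to first order in dx/x; for large errors they differ.
```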

Here's what I can say about the vector case:

  1. It's invariant under rotation
  2. It's invariant under uniform scaling
  3. If $x$ and $x_\text{approx}$ are collinear, say $x_\text{approx} = \alpha x$, then you recover the univariate relative error $|\alpha - 1|$

But property 3 breaks down somewhat when $\alpha < 0$: the log-based univariate relative error is $\infty$ if the signs differ, while the multivariate one is not.

  4. If the error $x - x_\text{approx}$ is orthogonal to $x_\text{approx}$, then $\operatorname{\varepsilon-rel}$ is the $\sin$ of the angle between $x$ and $x_\mathrm{approx}$ (a numerical check of these properties is sketched below).
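
Here is a quick numerical sanity check of properties 1, 2 and 4 (random test vectors, so only a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def rel_error(x_approx, x):
    return np.linalg.norm(x_approx - x) / np.linalg.norm(x)

x = rng.standard_normal(3)
x_approx = x + 0.1 * rng.standard_normal(3)

# 1. Invariance under rotation (any orthogonal map preserves norms).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
print(np.isclose(rel_error(Q @ x_approx, Q @ x), rel_error(x_approx, x)))

# 2. Invariance under uniform scaling.
print(np.isclose(rel_error(2.5 * x_approx, 2.5 * x), rel_error(x_approx, x)))

# 4. Error orthogonal to x_approx: rel_error equals sin of the angle.
x_a = rng.standard_normal(3)
e = np.cross(x_a, rng.standard_normal(3))  # some vector orthogonal to x_a
x_true = x_a + e                           # so the error x_true - x_a is e
cos_angle = x_true @ x_a / (np.linalg.norm(x_true) * np.linalg.norm(x_a))
print(np.isclose(rel_error(x_a, x_true), np.sin(np.arccos(cos_angle))))
```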

Your thoughts would be much appreciated.

user357269
  • It's not clear to me what you are looking for in an answer or how you got your second formula. The relative error is simply the magnitude of error scaled by the magnitude of the true vector. An error with a magnitude of 10 is fairly large for a vector with a magnitude of 20, but not for one with a magnitude of 20,000. Relative error gives you a way to describe that difference in the importance of an error. – Tyberius Oct 03 '21 at 15:20
  • @Tyberius: the second formula is for scalars only – user357269 Oct 03 '21 at 16:24
  • But for scalars you can just use the vector formula. I just haven't seen a form of the relative error involving logs, I have only seen the vector formula even for single variables. – Tyberius Oct 03 '21 at 18:22
  • You seem to think that the relative error is defined as $\log\frac{|\delta x|}{|x|}$, which is equivalent to your formula. But nobody uses this definition. Everyone thinks of the relative error as $\frac{|\delta x|}{|x|}$. – Wolfgang Bangerth Oct 05 '21 at 01:32
  • @WolfgangBangerth: you mean $\delta \log(x)$, right? You don't really have to use it; the values are typically indistinguishable from $\delta x/x$. For large errors they disagree, but then I'd prefer the values of $\delta \log(x)$ anyway because they're at least symmetric, so you don't get things like the error being -50% or +100% depending on the way you look at it. – user357269 Oct 05 '21 at 17:49
  • $\log(x/y)=\log(x)-\log(y)$ in general. It's not that you can't take the log of the relative error; it could be useful in some contexts. But people don't generally do this. Also, the second formula in your question, at least as currently written, has $\log(x_\text{approx})$ rather than $\log(\delta x)$. – Tyberius Oct 05 '21 at 20:32
  • @Tyberius: yes, it's meant to say $\log(x_{\text{approx}})$ (which is a.k.a. $\log(x_{\text{true}}+\delta x)$). I'm not taking log of the relative error – user357269 Oct 07 '21 at 06:43
  • @user357269 Also, I'd add one more thing to what I said below. In terms of sign, relative error will always be nonnegative, because by definition a norm is always nonnegative, and therefore the ratio of two norms is always nonnegative. Relative error, strictly speaking, can never be infinity either, as there are no infinitely large real numbers. There is an extension of the reals known as the hyperreals that has them, though. You would need a limit sign in there somewhere to have values in the extended real numbers. https://en.wikipedia.org/wiki/Normed_vector_space – David Reed Oct 09 '21 at 03:56

2 Answers


You are overthinking relative error in one dimension, and I expect that is the source of your confusion.

If I measure the length of an ant and I am off by 1 mm, it's a big deal. If I instead measure the distance from the Earth to the Moon and I am off by the same amount, my measurement is for all practical purposes perfect: relative to that distance, 1 mm is so small it can effectively be considered zero. This is what relative error measures. In both instances the absolute error is the same, but the relative error in the first measurement is much larger than in the second.

The extent to which this carries over to other vector spaces depends on whether the induced metric for the norm you are using (the induced metric is the value $\Vert v-w \Vert$) can fairly be said to measure how "closely" one vector approximates another. In situations where it does, relative error means the same thing it does in one dimension. In situations where it does not, the relative error is still well-defined but serves no purpose; it may pop up, but in those instances it is no longer referred to as relative error. Sometimes norms other than the Euclidean norm are more suitable for capturing this idea. The importance of the phrase "it depends on the context" in mathematics cannot be overstated.
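
To make the contrast concrete (figures rounded for illustration):

```python
abs_err = 1e-3               # 1 mm, in metres, for both measurements

ant_length = 5e-3            # a 5 mm ant
earth_moon = 3.844e8         # ~384,400 km Earth-Moon distance, in metres

print(abs_err / ant_length)  # 0.2      -> a 20% relative error
print(abs_err / earth_moon)  # ~2.6e-12 -> negligible
```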

David Reed

Let $x \in \mathbb{R}^{n}\setminus \{0\}$ be a point and $\delta x \in \mathbb{R}^{n}$ be a tangent vector at $x$.

The relative error $||\delta x||_x := \frac{||\delta x||}{||x||}$ defines a Riemannian metric on $\mathbb{R}^{n}\setminus \{0\}$.

Case $n = 1$

Let $0 < x_1 < x_2 \in \mathbb{R}$. The shortest path from $x_1$ to $x_2$ is given by $$x(t) = x_1 + (x_2-x_1) t.$$

Its length is $$ \begin{aligned} \int_0^1 || \dot{x}(t) ||_{x(t)} \mathrm{d}t &= \int_0^1 \frac{x_2 - x_1}{x_1 + (x_2 - x_1) t} \mathrm{d}t \\ &=\log(x_2) - \log(x_1). \end{aligned}$$

It follows that the distance between two points is given by $$ d(x_1, x_2) = |\log(|x_1|) - \log(|x_2|)|$$ if they have the same sign and $\infty$ otherwise. This is a symmetric, non-infinitesimal version of the relative error.
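
The closed form can be checked against direct numerical quadrature of the path length (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

x1, x2 = 2.0, 5.0

# Speed of the path x(t) = x1 + (x2 - x1) t, measured in the metric ||.||_x.
speed = lambda t: (x2 - x1) / (x1 + (x2 - x1) * t)

length, _ = quad(speed, 0.0, 1.0)
print(length, np.log(x2) - np.log(x1))  # both ~0.9163
```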

Case $n > 1$

In this case, the space $\mathbb{R}^{n}\setminus \{0\}$ is the Riemannian product of the sphere $S^{n-1}$ (with its standard round metric, i.e. arc length) and $\mathbb{R}^+$ (with the relative-error metric defined above).

For example, when $n=2$, the Riemannian metric for relative error is given by $$ g = \frac{1}{x^2 + y^2} \left( \mathrm{d}x \otimes \mathrm{d}x+ \mathrm{d}y \otimes \mathrm{d}y \right).$$

Or equivalently in polar coordinates: $$ g = \frac{1}{r^2} \mathrm{d}r \otimes \mathrm{d}r+ \mathrm{d}\theta \otimes \mathrm{d}\theta$$ which is the product metric of the relative error on $\mathbb{R}^+$ and arc length on the circle $S^1$.

It follows that the distance between two points is given by $$ d(p, q) = \sqrt{\left( \log ||p|| - \log ||q|| \right)^2 + \left[ \cos^{-1}\left( \frac{p \cdot q}{||p|| \cdot ||q||} \right) \right]^2}.$$ This is a symmetric, non-infinitesimal version of the relative error between two vectors.
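
In code, the distance looks as follows; for nearby vectors it agrees with the plain relative error to first order (a sketch with arbitrary test vectors):

```python
import numpy as np

def rel_error_distance(p, q):
    """Geodesic distance for the product metric on R^n minus the origin."""
    radial = np.log(np.linalg.norm(p)) - np.log(np.linalg.norm(q))
    cos_angle = p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
    angular = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return np.hypot(radial, angular)

p = np.array([1.0, 2.0, 2.0])
q = p + 1e-3 * np.array([0.5, -0.3, 0.2])
print(rel_error_distance(p, q))                   # ~2.05e-4
print(np.linalg.norm(p - q) / np.linalg.norm(q))  # nearly the same value
```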

user357269
  • Why do you think that being a symmetric function is a desirable property here? Usually when I'm speaking about errors I know which is the correct value and which is the inexact approximation. – Federico Poloni Jan 05 '24 at 23:50