Let $f: \Bbb R \to \Bbb R$ be continuous. What are efficient algorithms for finding all the zeros in an interval $[a, b]$? I am actually only interested in the smallest zero in that interval, if there is an algorithm that can provide me with just that.
-
What do you know about the function? Can you efficiently evaluate $f(x)$? Can you efficiently evaluate $f'(x)$ or $f''(x)$? – Wolfgang Bangerth Jul 08 '22 at 16:47
-
Separately, the function $f(x)=x\sin(1/x)$ is continuous on $[0,1]$, for example, but it has infinitely many zeros and you will have trouble finding them all. You need to restrict yourself to a smaller class of functions than just continuity. – Wolfgang Bangerth Jul 08 '22 at 16:49
-
We may even suppose $f$ is differentiable, and that there are finitely many zeros in the interval. But I don't know what $f'$ is explicitly. – Joe Shmo Jul 08 '22 at 17:00
-
Do you know the number of zeros? – nicoguaro Jul 08 '22 at 23:50
-
@nicoguaro Typically only a couple; no more than, say, 5. – Joe Shmo Jul 09 '22 at 02:44
-
In that case you can do an incremental search followed by a secant method. – nicoguaro Jul 09 '22 at 03:13
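The incremental-search-plus-secant approach suggested in this comment can be sketched as follows. This is a minimal illustration, not a production routine; the grid size `n`, tolerance, and iteration cap are arbitrary assumptions:

```python
import math

def first_zero(f, a, b, n=1000, tol=1e-12, max_iter=50):
    """Scan [a, b] on a uniform grid for the first sign change,
    then refine that bracket with the secant method."""
    h = (b - a) / n
    x0, f0 = a, f(a)
    for i in range(1, n + 1):
        x1 = a + i * h
        f1 = f(x1)
        if f0 == 0.0:           # grid point is exactly a zero
            return x0
        if f0 * f1 < 0.0:       # sign change: a root lies in [x0, x1]
            # Secant iteration starting from the bracketing pair.
            for _ in range(max_iter):
                if f1 == f0:
                    break
                x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
                if abs(x2 - x1) < tol:
                    return x2
                x0, f0 = x1, f1
                x1, f1 = x2, f(x2)
            return x1
        x0, f0 = x1, f1
    return None  # no sign change found on the grid

print(first_zero(math.sin, 1.0, 7.0))  # ~3.141592653589793
```

Note that this only finds zeros at which $f$ changes sign, and it can miss roots that fall between grid points; that limitation is exactly what the discussion below is about.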
3 Answers
The question is ill-posed, in the sense that you will never be able to find the smallest zero for all continuous functions on $[a,b]$ unless you know something more, such as some derivatives of $f(x)$.
For example, for $x\in[-1,1]$, let's say that you're looking at the function $f(x)=\frac{e^{-\frac{x^2}{2s^2}}}{\sqrt{2\pi s^2}}-1$. For some values of $s$, this function has two zeros, arbitrarily close to $0$. How do you find them without knowing anything about $f$? How do you even know that there are some zeros, if all you can do is sample randomly?
-
The objection is incorrect. You may suppose there exists at least one zero, if that's what bothers you (ideally the algorithm tells you if a zero doesn't exist, but it's ok if it doesn't). More importantly -- for a fixed $s$, the function $f$ you present does NOT have 2 roots that are arbitrarily close to zero (such a root would be uniquely $0$, perhaps with multiplicity). It does have 2 zeros, symmetrically located to the left and right of the origin, and that's ok. The algorithm should return the smaller of the two -- the negative root. There's no issue with your example. – Joe Shmo Jul 09 '22 at 00:44
-
The function I chose is just a centred (and shifted) normal distribution. It will have exactly two roots if $s<\frac1{\sqrt{2\pi}}$. As we take smaller and smaller $s$, the roots will get closer and closer to $0$. We can choose an $s$ so the absolute values of the roots are smaller than any value we want. – PC1 Jul 09 '22 at 01:42
-
Further, let's have a continuous function $f(x)$ for which $f(a)$ and $f(b)$ are negative. How can you even know that there is at least one zero? How can an algorithm detect that the function is positive over some domain? – PC1 Jul 09 '22 at 01:44
-
The function you describe is $\Bbb R^2 \to \Bbb R$ (a function of $s$ and $x$, since you allow $s$ to vary), whereas I indicated that the function of interest here is $\Bbb R \to \Bbb R$. Your objection is invalid for, say, the standard normal, which would be the appropriate example here. – Joe Shmo Jul 09 '22 at 02:22
-
Further, clearly the case when $f(a), f(b) < 0$ is not of interest. I indicated that much in my first comment. I repeat -- you may assume that at least one root exists, that there are finitely many of them, and that the standard IVT conditions are satisfied. – Joe Shmo Jul 09 '22 at 02:24
-
The function I chose is depending on $x$ only, $s$ is a parameter. You fix $s$ and let the algorithm find the roots. I just point out that for some values of $s$, namely when $s$ is very small, the function $f(x)$ becomes very spiky and finding the roots is like finding a needle in a haystack. – PC1 Jul 09 '22 at 02:25
-
"$s$ is a parameter to $f$" is another way of saying that $f$ is a function of $s$. If you fix $s$, it ceases being a parameter. This all beats around the bush -- surely there exists some literature addressing this problem for functions that are reasonable and well behaved enough. – Joe Shmo Jul 09 '22 at 02:32
-
I tend to agree with @PC1 - if all we know is that the function is continuous, and it has at least one root in the interval, that's not sufficient for a numerical algorithm to find all its roots in the interval in the general case. If we had some constraints on the derivative of the function that would perhaps allow formulating an algorithm for finding all roots (assuming there is a finite number of them). – Maxim Umansky Jul 09 '22 at 05:05
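One way to make the derivative constraint from this comment concrete: if a Lipschitz bound $|f'(x)| \le L$ is known on $[a,b]$, every zero can be bracketed by exhaustive search, because a cell of width $h$ with midpoint $m$ cannot contain a zero when $|f(m)| > Lh/2$. A minimal sketch of that exclusion test (the bound `L` and cell width `h` are assumed inputs, not something an algorithm can discover from samples alone):

```python
import math

def bracket_roots(f, a, b, L, h=1e-3):
    """Return the cells of [a, b] that may contain a zero of f,
    assuming the Lipschitz bound |f'(x)| <= L holds on [a, b]."""
    n = max(1, math.ceil((b - a) / h))
    h = (b - a) / n
    cells = []
    for i in range(n):
        lo = a + i * h
        mid = lo + h / 2
        # If |f(mid)| > L*h/2, f cannot reach zero anywhere in this cell.
        if abs(f(mid)) <= L * h / 2:
            cells.append((lo, lo + h))
    return cells

# sin is 1-Lipschitz; only cells near pi and 2*pi survive on [1, 7].
print(bracket_roots(math.sin, 1.0, 7.0, L=1.0))
```

Every surviving cell can then be handed to a local root-finder; without a bound like $L$, no finite number of samples can rule out a root hiding between samples.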
As others have pointed out, you cannot solve this problem in general for all continuous functions. But there are methods that work quite well in practice. One such method is to sample the function at Chebyshev points and compute the roots of the interpolating polynomial. Monitoring the decay of the Chebyshev coefficients can be used to decide when the function has been sampled adequately. For functions that are sufficiently smooth, you can stop when the coefficients are close to the machine epsilon, for example. This method is implemented in the ‘roots’ function of the Chebfun package (https://www.chebfun.org).
Once you have found the Chebyshev coefficients, you form a generalized companion matrix whose eigenvalues are the roots. You could use a Krylov method to find only the smallest root (as in MATLAB's eigs command).
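A minimal sketch of this idea using NumPy's Chebyshev utilities rather than Chebfun itself (the fixed degree `deg` and trim tolerance `tol` are illustrative stand-ins for Chebfun's adaptive coefficient-decay check):

```python
import numpy as np
from numpy.polynomial import Chebyshev

def smallest_zero(f, a, b, deg=64, tol=1e-13):
    """Approximate the smallest zero of f on [a, b] by interpolating f
    at Chebyshev points and taking the roots of the interpolant."""
    # Interpolate f at deg + 1 Chebyshev points of [a, b].
    p = Chebyshev.interpolate(f, deg, domain=[a, b])
    # Drop negligible trailing coefficients so the colleague
    # (generalized companion) matrix stays well conditioned.
    p = p.trim(tol)
    # p.roots() returns the eigenvalues of the colleague matrix,
    # mapped back to the domain [a, b].
    r = p.roots()
    # Keep numerically real roots that lie inside the interval.
    r = r.real[(np.abs(r.imag) < 1e-10) & (r.real >= a) & (r.real <= b)]
    return r.min() if r.size else None

print(smallest_zero(np.sin, 1.0, 7.0))  # ~3.141592653589793
```

This computes all eigenvalues with dense linear algebra and then picks the minimum; the Krylov shortcut mentioned above would target the smallest eigenvalue directly.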
-
+1, this is what I would've added right now if you hadn't. Throw the function into chebfun, let it approximate with machine accuracy, and then find the roots using linear algebra. – davidhigh Jul 12 '22 at 22:19
-
This is precisely what I was looking for. In fact, after posting this question, I began implementing the method you describe. – Joe Shmo Jul 13 '22 at 01:28
Another solution is interval root-finding, for instance with the interval Newton method. This technique can find all zeros of a given differentiable function while automatically proving that these are all the zeros in the given interval.
A package to do that is Julia's IntervalRootFinding.jl. All the required derivatives are computed automatically, so you can just provide the function as a formula, e.g., x -> x^2 - 2x.
```
julia> using IntervalArithmetic, IntervalRootFinding

julia> roots(x -> x^2 - 2x, -10..10)
2-element Vector{Root{Interval{Float64}}}:
 Root([-8.17345e-10, 6.8733e-10], :unique)
 Root([1.99999, 2.00001], :unique)
```
The algorithm has identified that the two given intervals (of diameter $\approx 2\times 10^{-9}$, a configurable threshold) each contain a unique zero of the function.
Here is an example with the function mentioned in the other answer:
```
julia> s = 1e-5
1.0e-5

julia> roots(x -> exp(-x^2/2s^2) / sqrt(2pi*s^2) - 1, -10..10)
2-element Vector{Root{Interval{Float64}}}:
 Root([4.60292e-05, 4.60314e-05], :unique)
 Root([-4.60313e-05, -4.60295e-05], :unique)
```
-
The function is a black box to me, although from sampling and plotting it (and from the application domain) I am fairly confident that it’s a nice function and is well behaved. – Joe Shmo Jul 13 '22 at 01:29
-
What do you mean by "black-box" exactly? Just to be sure that automatic differentiation is off the table. – Federico Poloni Jul 13 '22 at 09:25
-
I can evaluate $f(x)$ for any $x$, but I don’t know what $f$ itself looks like – Joe Shmo Jul 13 '22 at 11:39
-
But what is in the computer code that you call to evaluate $f$? Library calls? For loops? Code someone else wrote? Is there anything specific that makes it unsuitable for automatic differentiation? – Federico Poloni Jul 13 '22 at 11:42
-
You have an obscure interface to call $f$. Yes, code someone else wrote, but more importantly, you don't have access to the internal implementation. You can sample, however. For autodiff you would need to define $\nabla f$, but you don't have access to $f$, much less $\nabla f$. – Joe Shmo Jul 13 '22 at 13:28
-
The irony is that $f$ is very likely a composition of simple functions (polynomials, and at most exponential and logarithmic functions). – Joe Shmo Jul 13 '22 at 13:29