6

Consider a function $f : \mathbb R \to \mathbb R$ with a unique fixed point $x^* \in \mathbb R$: that is, $x^* = f(x^*)$, and there does not exist another $x \in \mathbb R$ such that $x = f(x)$. If we only knew that $f$ had a unique fixed point, but didn't know the value of $x^*$, could we then compute $x^*$ as the solution of the following optimization problem (with $\mathcal X = \mathbb R$)? $$ \begin{aligned} \max_{x \in \mathcal X} \quad & x \\ \textrm{s.t.} \quad & x \leq f(x) \end{aligned} $$

It seems that the answer is "yes", but I'd like some feedback on my reasoning. Let $x^*$ be the solution to the problem above, so that $$ \begin{align} & x \leq x^* \quad \text{for every feasible } x, \text{ i.e. every } x \text{ with } x \leq f(x), \tag{1} \\ & x^* \leq f(x^*). \tag{2} \end{align} $$ Suppose, for the sake of contradiction, that $x^*$ is not a fixed point of $f$, i.e. that $x^* \neq f(x^*)$. Then, from $(2)$, $x^* < f(x^*) = v$. However, this contradicts $(1)$, since $v > x^*$. Therefore, $x^* = f(x^*)$. Does this seem reasonable?
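A minimal numerical sketch of this formulation (an editorial illustration, not part of the original question): here $f(x) = \cos x$, whose unique real fixed point is the Dottie number $x^* \approx 0.739$; the choice of solver (SLSQP via `scipy.optimize.minimize`) and of starting point are assumptions made only for the example.

```python
# Sketch: solve  max x  s.t.  x <= f(x)  for f = cos, whose unique real
# fixed point is approximately 0.7390851 (the Dottie number).
import numpy as np
from scipy.optimize import minimize

f = np.cos  # example f with a unique fixed point on R

res = minimize(
    fun=lambda x: -x[0],          # maximize x  <=>  minimize -x
    x0=[0.0],                     # feasible start, since 0 <= cos(0) = 1
    method="SLSQP",
    constraints=[{"type": "ineq",                     # SLSQP convention: fun(x) >= 0
                  "fun": lambda x: f(x[0]) - x[0]}],  # i.e. x <= f(x)
)
print(res.x[0], f(res.x[0]))      # both approximately 0.7390851
```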


Update

Based on @ConnFus's answer, it must first be proven that there exists one and only one solution $x^*$ to the optimization problem mentioned above. Otherwise, my proof above falls apart, since it rests on the assumption that $x^*$ exists and is unique.

mhdadk
  • If, for the sake of contradiction, you assume that $x$ is a fixed point but not the sole, unique fixed point, then your argumentation is not reasonable. – marco tognoli Dec 21 '23 at 20:36
  • @marcotognoli I've edited my wording so that we assume (for the sake of contradiction) that $x^*$ is "not a fixed point of $f$". Previously, we assumed (for the sake of contradiction) that $x^*$ is "not the unique fixed point of $f$". Is this what you meant? – mhdadk Dec 22 '23 at 02:59
  • In my experience it's often easier to solve an optimisation problem that has a complicated objective function but simple constraints, than an optimisation problem that has a simple objective function but complicated constraints. In your problem, $x$ is simple, and $f(x)$ is complicated, and you put $f(x)$ in the constraints. Instead you might want to consider the problem $\min |f(x) - x|$ which should also find $x^*$ (and which easily generalises to functions whose domain is not $\mathbb R$) – Stef Dec 22 '23 at 14:15
  • @Stef: And the choice of norm to be minimized can be made to make the problem nicer for analysis (square of 2-norm is often used, i.e. minimization of square error) – Ben Voigt Dec 22 '23 at 17:56
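A minimal sketch of the squared-error reformulation suggested in the two comments above (the choice of $f(x) = \cos x$ and of `scipy.optimize.minimize_scalar` is an illustrative assumption, not part of the comments):

```python
# Sketch: find the fixed point of f by minimizing the squared residual
# (f(x) - x)^2, which is zero exactly at a fixed point of f.
import numpy as np
from scipy.optimize import minimize_scalar

f = np.cos  # example f with a unique fixed point on R

res = minimize_scalar(lambda x: (f(x) - x) ** 2)  # unconstrained 1-D minimization
print(res.x, f(res.x))  # both approximately 0.7390851
```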

3 Answers

4

This is a valid formulation if a global optimizer is used, and it succeeds in finding the global optimum.

A local optimizer might fail to find the global optimum, in which case any solution found might not be a fixed point of the original problem.

Alternatively, you could formulate the optimization problem:

$$ \begin{aligned} \max_{x} \quad & 0 \\ \textrm{s.t.} \quad & x = f(x) \end{aligned} $$

whose solution, $x$, if found, is the fixed point you seek. It is possible that a local optimizer might fail to find the fixed point, even if it exists, although the local optimizer should not return a solution which is not a fixed point of the original problem.

In summary, your formulation might return a solution which is not globally optimal, and such a solution need not be a fixed point of the original problem. The alternative formulation in my answer should not return a solution which is not a fixed point of the original problem, but, if a global optimizer is not used, it might fail to find a solution at all.

Edit: Note that even when the inequality constraint $x \le f(x)$ defines a convex feasible set (for instance, when $f$ is concave), the equality constraint $x = f(x)$ is non-convex unless $f$ is affine (linear). Optimization solvers generally have an easier time dealing with the convex constraint $x \le f(x)$ than with the non-convex constraint $x = f(x)$, even if it is known that $x = f(x)$ is satisfied at the solution.
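A minimal sketch of the equality-constrained ("feasibility only") formulation described in this answer, again with $f(x) = \cos x$ as an illustrative stand-in; the solver (SLSQP) and starting point are assumptions, and convergence is not guaranteed in general:

```python
# Sketch: the alternative formulation  max 0  s.t.  x = f(x), i.e. a pure
# feasibility problem posed with an equality constraint.
import numpy as np
from scipy.optimize import minimize

f = np.cos  # illustrative f with a unique fixed point on R

res = minimize(
    fun=lambda x: 0.0,            # constant objective: we only want feasibility
    x0=[0.0],
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda x: f(x[0]) - x[0]}],  # x = f(x)
)
print(res.x[0])  # approximately 0.7390851 if the solver converges
```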

Mark L. Stone
  • Thanks for the feedback. I'm guessing if $f$ is concave (such that the feasible set $x - f(x) \leq 0$ is convex), then a local optimizer should successfully find the fixed point, right? – mhdadk Dec 21 '23 at 14:07
  • 1
    Generally speaking, yes, but if the function is numerically poorly behaved or the optimizer is of low quality, it might not succeed, even though it probably shouldn't produce a solution which is not globally optimal. But there are many crummy optimization solvers, so no guarantees on any of that. – Mark L. Stone Dec 21 '23 at 14:18
4

If we let $f(x):=e^x-1$, then it has the unique fixed point $x^*=0$.

However, for all $x\in\mathbb R$ it holds that $x\le f(x)$, so the optimization problem $$ \begin{aligned} \max_{x \in \mathcal X} \quad & x \\ \textrm{s.t.} \quad & x \leq f(x) \end{aligned} $$ has no solution.


Addendum:

Let's add the assumption that $f$ is continuous, because otherwise I think this is a lost cause. Now, let $x^*$ be the unique fixed point. Then we have either $$\forall z>x^*:\quad z<f(z)\tag{1}$$ or $$\forall z>x^*:\quad z>f(z)\tag{2}$$ (By continuity, if $f(z)-z$ changed sign on $(x^*,\infty)$, the intermediate value theorem would give a second fixed point.) In case (2) we are indeed done, but in case (1) the optimization problem misses the fixed point.

So let's look at case (1). Within it, by the same argument applied on $(-\infty,x^*)$, we have two subcases: either $$\forall z<x^*:\quad z<f(z)\tag{1.1}$$ or $$\forall z<x^*:\quad z>f(z)\tag{1.2}$$

In subcase (1.2) the feasible set $\{x \mid x \leq f(x)\}$ is exactly $[x^*,\infty)$, so we can change the optimization problem to $$\begin{aligned} \min_{x \in \mathcal X} \quad & x \\ \textrm{s.t.} \quad & x \leq f(x) \end{aligned}$$ and are done.

In subcase (1.1) we have that $\forall z\in\mathbb R:\ z\le f(z)$, with equality only at $z=x^*$. So we can change the optimization problem to $$\max_{x \in \mathcal X} \quad x-f(x)$$ and are done.

So, in the end you have to handle 3 cases, artificially restrict $f$ to be one of those cases, or use a different optimization problem altogether.
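A small numerical sketch of subcase (1.1), using the counterexample $f(x) = e^x - 1$ from the top of this answer (the use of `scipy.optimize.minimize_scalar` is an assumption made for illustration): the original maximization problem is unbounded, but maximizing $x - f(x)$, i.e. minimizing $f(x) - x$, recovers the unique fixed point $x^* = 0$.

```python
# Sketch for subcase (1.1): f(x) = exp(x) - 1 satisfies x <= f(x) for every
# real x, so "max x s.t. x <= f(x)" is unbounded. Maximizing x - f(x)
# (equivalently, minimizing f(x) - x) recovers the unique fixed point x* = 0.
import numpy as np
from scipy.optimize import minimize_scalar


def f(x):
    return np.exp(x) - 1


res = minimize_scalar(lambda x: f(x) - x)  # convex in x, unique minimizer at x* = 0
print(res.x)  # approximately 0.0
```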

ConnFus
  • Good point. I think requiring that $f$ also be surjective on $\mathbb R$, in addition to having a unique fixed point, should fix this, since the set $\{x \in \mathbb R \mid x \leq f(x)\}$ should always be bounded and non-empty when $f$ is surjective, and so your counterexample would no longer apply. – mhdadk Dec 23 '23 at 05:01
  • @mhdadk If you want to add constraints on $f$ such that your statement works, they have to imply (2). Surjectivity doesn't do that, which we can see by looking at an example for case (1.1), e.g. $$f(x)= \begin{cases} e^x-1, & x\ge 0 \\ x/2, & x< 0 \end{cases}$$ which is surjective and has $\{x \in \mathbb R \mid x \leq f(x)\}$ unbounded. – ConnFus Dec 23 '23 at 07:03
  • Thanks again! Just a few questions: (1) the reason you are done in case $(2)$ is because, if $x^* = \max\{x \in \mathcal X \mid x \leq f(x)\}$, then case $(2)$ implies that there would not exist another point in the feasible set that is larger than $x^*$, correct? – mhdadk Dec 23 '23 at 07:38
  • (2) Similarly, the reason you are not done in case $(1)$ is that there would exist other points in the feasible set $\{x \in \mathbb R \mid x \leq f(x)\}$ that are larger than $x^*$, correct? – mhdadk Dec 23 '23 at 07:40
  • 1
    (3) Where did you use the assumption that $f$ is continuous in your argument? – mhdadk Dec 23 '23 at 07:42
  • 1
    In regards to (1) and (2) of your questions: Yes, that is precisely correct. In regards to (3): Without continuity, the case distinction into (1) and (2) doesn't work anymore. Suppose $f$ were continuous but neither case (1) nor case (2) applied (that is, we have some $z_1,z_2>x^*$ with $z_1<f(z_1)$ and $z_2>f(z_2)$). By the intermediate value theorem we then know that at some point $z>x^*$ we must have $z=f(z)$, i.e. a second fixed point. However, if we assume that case (2) holds directly, then we indeed don't need $f$ to be continuous. – ConnFus Dec 23 '23 at 07:51
  • Actually, one last question about "Without continuity the case distinction into (1) and (2) doesn't work anymore": because you have already assumed that $x^*$ is the *unique* fixed point of $f$, then surely it is the case that for every $x \neq x^*$, either $x < f(x)$ or $x > f(x)$, of which your $(1)$ and $(2)$ are special cases? That is, you still won't need the assumption of continuity? Your continuity argument seems only necessary if we do not already assume that $x^*$ is the unique fixed point. – mhdadk Dec 23 '23 at 12:29
  • No, without continuity we can let $f(x)$ jump between the two sides of $g(x):=x$ without ever touching the graph of $g(x)$. Take for example the piecewise function $$f(x)=\begin{cases} 0 & x=0 \\ \lfloor x\rfloor +2 & x\neq 0\land (\lfloor x\rfloor \bmod 2)=1 \\ \lfloor x\rfloor -1 & x\neq 0\land (\lfloor x\rfloor \bmod 2)=0\end{cases}$$ It has the unique fixed point $x^*=0$, but over every interval $[n,n+2]$ for $n\in\mathbb Z$ there are both some $x,x'\in [n,n+2]$ with $x < f(x)$ and $x' > f(x')$. – ConnFus Dec 23 '23 at 13:35
  • Here is the visualization of the plot: https://www.wolframalpha.com/input?i=Plot%5B%7BPiecewise%5B%7B%7B0%2Cx%3D%3D0%7D%2C%7BFLOOR%5Bx%5D+%2B+2%2C+x%21%3D0+%26%26+Mod%5BFloor%5Bx%5D%2C+2%5D+%3D%3D+1%7D%2C+%7BFloor%5Bx%5D+-+1%2Cx%21%3D0+%26%26+Mod%5BFloor%5Bx%5D%2C+2%5D+%3D%3D+0%7D%7D%5D%2Cx%7D%2C%7Bx%2C-5%2C5%7D%5D. Though the plot for some reason doesn't show the value $f(0)=0$... – ConnFus Dec 23 '23 at 13:36
0

Let $f$ be a real function of a real variable, defined by $f(x)=x$.

Let us consider $X_1=[0, 0.5]$ and $X_2=[0.5, 1]$.

$f:X_1 \rightarrow X_1$ has, as the solution of the optimization problem, $x_1=0.5$; $f:X_2 \rightarrow X_2$ has, as the solution of the optimization problem, $x_2=1$. In both cases, $x_1=0.5$ and $x_2=1$ are fixed points: $f(0.5)=0.5$ and $f(1)=1$. These two fixed points solve the optimization problem and are not unique!

Now, take $X$ as the union of $X_1$ with $X_2$. It turns out that $x_2=1$ is the unique solution of the optimization problem, in which we seek the maximum fixed point. However, the function $f$ has more than one fixed point, so we have shown that the proposition "if $x^*$ is a solution of the optimization problem, then $x^*$ is the unique fixed point of $f$" is false in general. As a consequence, the problem of finding a fixed point is not, in general, equivalent to solving an optimization problem.

marco tognoli
  • 2
    I’m not sure what you mean here, since your example of $f$ does not have a single unique fixed point. This is the main assumption in my question. Could you please elaborate? – mhdadk Dec 22 '23 at 03:00
  • This does not appear to answer the question that was asked. The question states "If we only knew that $f$ had a unique fixed point, [...] could we [...]?" Your $f$ does not have a unique fixed point, so it is not relevant to the question that was asked. – D.W. Dec 22 '23 at 04:35
  • @mhdadk Question: "If we only knew that $f$ had a unique fixed point, [...] could we [...]?" My answer gives a counterexample. In general, it is not true that one can find a unique fixed point by solving an optimization problem, and vice versa. – marco tognoli Dec 22 '23 at 06:07
  • How is this a counter-example? Your "counterexample" does not have a unique fixed point, so it is immediately not relevant to my question. Your counterexample would need to have a unique fixed point and at the same time show that this fixed point is not computable as the solution of an optimization problem. – mhdadk Dec 22 '23 at 06:17
  • How is my counterexample not relevant to your question? Your question is not well-posed. By the way, have you considered the union of $X_1$ and $X_2$? How many fixed points do you count? If you cannot accept answers you do not like, please avoid asking questions. All the best. – marco tognoli Dec 22 '23 at 06:26