
My question is about a Linear Program (LP) of the form $\bf Ax\ge b$ with $\bf x\ge0$.

From a theoretical standpoint: Given a feasible solution $\mathbf{x^{(0)}}$, how can we check (verify) whether it is an extreme point?

I found this method on the web, but I don't know whether it is correct.

Can anyone please comment on the above method? Or is anyone aware of other methods for checking whether a point $\mathbf{x^{(0)}}$ is an extreme point of the LP's feasible region?

2 Answers


The page you linked is correct. Note that $Ax\ge b$ there includes any nonnegativity constraints, so their $A$ combines your $A$ and an identity matrix.

For a feasible solution $x\in\mathbb{R}^n$ to be an extreme point, there must be at least $n$ bounding hyperplanes of the feasible region that pass through $x$ (meaning at least $n$ constraints, including sign constraints, that are binding at $x$), and $n$ of those hyperplanes must have linearly independent normals. The constraints corresponding to those $n$ hyperplanes are satisfied as equalities (since they are binding) and have rank $n$ (since the constraints intersect only at one point, namely $x$).
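
In case it's useful, here is a minimal sketch of that test in code. This is my own illustration rather than anything from the linked page; the matrix `A`, vector `b`, and tolerance below are made up for the example. It stacks the sign constraints under the structural rows (matching the combined $A$ described above), collects the rows that are binding at the candidate point, and checks whether those rows have rank $n$:

```python
import numpy as np

def is_extreme_point(A, b, x0, tol=1e-8):
    """Check whether a feasible x0 is an extreme point of {x : A x >= b, x >= 0}."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    n = x0.size

    # Combined system [A; I] x >= [b; 0], so the sign constraints are included.
    A_all = np.vstack([A, np.eye(n)])
    b_all = np.concatenate([b, np.zeros(n)])

    slack = A_all @ x0 - b_all
    if np.any(slack < -tol):
        raise ValueError("x0 is not feasible")

    # Rows that are binding at x0 (slack equal to zero within tolerance).
    binding = A_all[np.abs(slack) <= tol]

    # x0 is extreme iff n of the binding normals are linearly independent.
    return binding.shape[0] >= n and np.linalg.matrix_rank(binding, tol=tol) == n


# Tiny example on the unit square 0 <= x <= 1: (0, 0) is a vertex, (0.5, 0) is not.
A = np.array([[-1.0, 0.0], [0.0, -1.0]])   # x1 <= 1 and x2 <= 1 written as -x >= -1
b = np.array([-1.0, -1.0])
print(is_extreme_point(A, b, [0.0, 0.0]))   # True
print(is_extreme_point(A, b, [0.5, 0.0]))   # False
```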

prubin
  • Thanks. Yes, my thoughts were similar. At extreme points, the slack variables of the $n$ bounding constraints are 0. But at non-extreme points, one or more of these slacks kicks in (becomes non-zero), so we need $t$ equations to pin down the point (since we now need to find the values of $t$ variables), where $t \ge n+1$. – Shuxue Jiaoshou Oct 30 '22 at 20:03
  • Some relevant publications: (1) Complexity of Linear Programming (Page 6), and (2) Algorithms in Combinatorial Geometry by Edelsbrunner (Ch. 10) where he says "the problem is equivalent to one of determining whether a given half-space of an LP is redundant". – Shuxue Jiaoshou Oct 31 '22 at 07:34
  • Another thought: show that there is a vector $D \in \mathbb{R}^n$ such that the objective function $DX = \sum_{j=1}^n d_jx_j$ achieves either a unique maximum or a unique minimum at the given point $X^{(0)}$ (among all points in the original LP feasible region). One such construction is sketched just below these comments. – Shuxue Jiaoshou Oct 31 '22 at 10:01
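
Regarding the last comment, here is a sketch of one standard construction of such a vector (my own illustration, using the combined system $\begin{pmatrix}A \\ I\end{pmatrix} x \ge \begin{pmatrix}b \\ 0\end{pmatrix}$ from the answer above): let $B$ be the index set of the rows $a_i^\top x \ge b_i$ that are binding at $X^{(0)}$, and set $D = \sum_{i \in B} a_i$. For every feasible $x$,

$$ D^\top x \;=\; \sum_{i \in B} a_i^\top x \;\ge\; \sum_{i \in B} b_i \;=\; D^\top X^{(0)}, $$

so $X^{(0)}$ minimizes $D^\top x$ over the feasible region, and the minimizer is unique exactly when the binding normals $\{a_i\}_{i \in B}$ have rank $n$, which is the same condition used in the answer above.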

In

https://scholar.google.com/citations?view_op=view_citation&hl=da&user=kEOeI2gAAAAJ&cstart=20&pagesize=80&citation_for_view=kEOeI2gAAAAJ:KlAtU1dfN6UC

Nimrod Megiddo proves that if you have a pair of primal and dual optimal solutions, you can find an optimal basic solution in strongly polynomial time. A basic solution specifies a vertex (extreme point) solution.

Moreover, unless there is a strongly polynomial algorithm for LP, you really do need both a primal and a dual optimal solution for that to be true.

Hence, you can use Megiddo's idea, and I doubt it can be improved on much, since it is already strongly polynomial.
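
To give a concrete feel for what such a crossover step does, here is a naive "purification" sketch. To be clear, this is my own illustration and not Megiddo's algorithm; the function name, tolerances, and example data are made up, and it only handles the question's form $Ax \ge b$, $x \ge 0$. Starting from an optimal point, it repeatedly moves along a direction that keeps the binding constraints tight and the objective value unchanged until another constraint becomes binding; once the binding rows have rank $n$, the point is a vertex of the optimal face:

```python
import numpy as np
from scipy.linalg import null_space

def purify_to_vertex(A, b, c, x, tol=1e-9, max_iter=1000):
    """Move an optimal point x of min c.x s.t. A x >= b, x >= 0 to a vertex
    of the optimal face, keeping the objective value fixed (naive sketch)."""
    A = np.asarray(A, float); b = np.asarray(b, float)
    c = np.asarray(c, float); x = np.asarray(x, float).copy()
    n = x.size
    # Combined system [A; I] x >= [b; 0], so the sign constraints are included.
    A_all = np.vstack([A, np.eye(n)])
    b_all = np.concatenate([b, np.zeros(n)])

    for _ in range(max_iter):
        slack = A_all @ x - b_all
        binding = A_all[slack <= tol]
        if binding.shape[0] >= n and np.linalg.matrix_rank(binding, tol=1e-7) == n:
            return x  # binding normals span R^n, so x is a vertex

        # Direction that keeps the binding rows tight and c^T x unchanged,
        # i.e. a direction along the optimal face.
        basis = null_space(np.vstack([binding, c[None, :]]))
        if basis.size == 0:
            return x  # no such direction (degenerate corner case); give up
        d = basis[:, 0]

        # Walk until some currently-slack constraint becomes binding.
        moved = False
        for direction in (d, -d):
            rates = A_all @ direction
            limiting = rates < -tol          # rows whose slack shrinks
            if not np.any(limiting):
                continue                     # unbounded this way; try -d
            step = np.min(slack[limiting] / -rates[limiting])
            x = x + step * direction
            moved = True
            break
        if not moved:
            return x  # the optimal face contains a line; there is no vertex
    return x


# Example: min x1 + x2 over {x1 + x2 >= 1, x >= 0}. The midpoint of the
# optimal edge purifies to one of its endpoints, (1, 0) or (0, 1).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 1.0])
print(purify_to_vertex(A, b, c, np.array([0.5, 0.5])))
```

A production crossover (and Megiddo's procedure, which produces an optimal basis rather than just a vertex) is considerably more careful about degeneracy and numerical tolerances than this sketch.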

ErlingMOSEK