31

In optimization literature, I frequently see solution methods termed "exact" or "approximate". (I use the term "method" here because I suspect exactness, or its lack, is a function of both algorithm and model.) My understanding is that "exact" translates to "produces a provably optimal solution" and "approximate" translates to "may or may not find an optimal solution but will not provide proof of optimality". I'd like to know how others view the terms.

The motivation is a review of a recent paper submission. In it, we posed a binary linear programming model and a solution procedure that used branch-and-cut with a variant of Benders decomposition. We called the combination an exact method for solving the underlying problem. We also provided computational examples, all but one of which we solved to optimality. In the one remaining case, I cut off the run at 10 hours with an optimality gap of around 12%. The reviewer pointed to that case as an indication the procedure was not "exact". My initial inclination is to dismiss the comment, since branch-and-cut on a bounded ILP will always reach proven optimality given enough resources (time and memory). On the other hand, from a practical standpoint, if you're not going to get a proven optimal solution within your lifetime, perhaps the method really should not be considered exact.

I know we're not supposed to ask for opinions here, so instead I'll ask which is the proper interpretation of "exact" (or whether I'm completely off base in my interpretation of the term).

prubin
  • 39,078
  • 3
  • 37
  • 104
  • 5
    I would also ask about "heuristic" methods, since I like to think of an "approximate" solution as the result of an "approximation algorithm", which provides a theoretical bound (worst-case or expected) on the optimal value. For instance, the Goemans-Williamson max-cut algorithm is an approximation algorithm. – Stefano Gualandi Jul 09 '19 at 15:50

5 Answers

28

Exact: algorithm will eventually provide a provably optimal solution.

Approximate: algorithm will eventually produce a solution with some guarantee (e.g. a tour at most twice as long as the shortest tour).

Heuristic: Algorithms that do not give any worst-case guarantee whatsoever.

Finite convergence and runtime are separate from "exactness" properties of an algorithm.

Michael Feldmeier
  • 2,856
  • 14
  • 40
14

An exact method will (typically within a bounded number of steps) provide a proven optimal solution. That is, a solution x* together with a guarantee that no other feasible solution has an objective better than that of x*. Typically, exact methods compute two types of bounds, lower (L) and upper (U), and optimality is proven whenever the two bounds coincide.
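As a hypothetical sketch (the function names and tolerance are my own choices, not from this answer), the bound-based optimality test that exact solvers apply looks like this:

```python
# Hypothetical sketch: how a branch-and-bound style exact method
# certifies optimality by closing the gap between its bounds.
# For a minimization problem, `lower` is a valid lower bound (L) and
# `upper` is the objective of the best feasible solution found (U).

def optimality_gap(lower: float, upper: float) -> float:
    """Relative optimality gap, roughly as MIP solvers report it."""
    if upper == 0:
        return abs(upper - lower)
    return abs(upper - lower) / abs(upper)

def is_proven_optimal(lower: float, upper: float, tol: float = 1e-6) -> bool:
    """Optimality is proven when both bounds (nearly) coincide."""
    return optimality_gap(lower, upper) <= tol

# Bounds coincide: the incumbent is proven optimal.
print(is_proven_optimal(100.0, 100.0))  # True
# A 12% gap, as in the question's unfinished run: no proof yet.
print(optimality_gap(88.0, 100.0))  # 0.12
```

The point is that "exact" refers to this certificate, not to how long it takes the two bounds to meet.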

In contrast to exact methods, heuristics provide no guarantee of optimality. In the OR literature, it is common to refer to such methods as heuristics rather than approximate methods. The latter term is more commonly used for algorithms that come with a proof of solution quality, though not of optimality. For instance, the TSP admits a 1.5-approximation algorithm (Christofides' method), that is, a method that constructs a solution whose cost is at most 1.5 times that of an optimal TSP solution.
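To make the idea of such a quality guarantee concrete, here is a hypothetical sketch of the classic MST-doubling 2-approximation for the metric TSP (a weaker guarantee than Christofides' 1.5, but compact enough to show): the tour it returns is provably at most twice the optimal length, even though it is almost never optimal itself.

```python
# Hypothetical sketch of the MST-doubling 2-approximation for metric TSP.
# Guarantee: preorder walk of the MST, with shortcuts, costs at most
# 2 * MST weight <= 2 * optimal tour cost (by the triangle inequality).

def mst_edges(points, dist):
    """Prim's algorithm; returns the MST as adjacency lists."""
    n = len(points)
    in_tree = {0}
    adj = {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n)
                    if v not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    return adj

def two_approx_tour(points, dist):
    """Preorder walk of the MST: cost <= 2 * optimal (metric dist)."""
    adj = mst_edges(points, dist)
    tour, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        tour.append(u)
        stack.extend(reversed(adj[u]))
    return tour

def tour_cost(tour, points, dist):
    """Cost of the closed tour visiting `tour` in order."""
    return sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

An approximation algorithm in this sense always terminates with that worst-case bound in hand; a heuristic, by contrast, may return an equally good tour but offers no such certificate.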

With respect to the reviewer's comment, it is indeed a little odd. Using the same rationale, one would argue that Dijkstra's method is not exact because it will fail at solving shortest path problems containing trillions of points. The CONCORDE solver for the TSP is another remarkable example of an exact method. Like any other method for a reasonably difficult optimization problem, it will fail on extremely large instances, yet no one would argue that CONCORDE is not exact. It seems to me that you need to make your point using this type of argument. As I noted before, an exact algorithm is one that constructs, within a finite number of steps, a solution x* and a guarantee that no other feasible solution of better objective value exists (usually by computing lower and upper bounds). No such algorithm will scale to solve every single possible instance on earth, unless of course the problem is trivial.
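As a reminder of what "exact but not infinitely scalable" means, here is a standard Dijkstra implementation (a hypothetical sketch; the adjacency-list graph format is my own choice). It returns provably shortest distances on any instance it can hold in memory, yet it would still "fail" on an instance with trillions of nodes; that failure says nothing about its exactness.

```python
# Hypothetical illustration: Dijkstra's algorithm is exact (provably
# shortest distances), yet it cannot handle arbitrarily large inputs.
# Exactness and scalability are separate questions.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, nonnegative_weight), ...]}"""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was found
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```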

Claudio Contardo
  • 1,574
  • 6
  • 13
13

Exact: Provably optimal

Approximate: offers an upper bound on the gap

I would add heuristics: procedures that, as you described, may or may not provide an optimal solution (without any proof or guarantee).

Daniel Duque
  • 1,355
  • 6
  • 20
10

In addition to the other answers posted already, I'll add that the term approximation algorithm means an algorithm with a provable worst-case error bound that (as @MarcoLubbecke reminded me in the comments) has polynomial runtime. But the term is often misused to refer to a heuristic that may or may not have such a provable bound.

I would have interpreted the term approximate algorithm as roughly equivalent to heuristic, with or without error bounds. From others' answers, it seems they would disagree, and that they would argue that approximate algorithm = approximation algorithm. From a purely linguistic point of view, that certainly seems preferable; it would be really annoying if -e and -ion meant two very different things. But I am not sure I have seen the terms used this way consistently.

LarrySnyder610
  • 13,141
  • 3
  • 41
  • 105
  • 2
    plus, an approximation algorithm must have polynomial runtime – Marco Lübbecke Jul 09 '19 at 17:48
  • 2
    I'm glad to see that I'm not the only one to have come across "approximate algorithm" as a code-phrase for (meta)heuristic. – prubin Jul 09 '19 at 19:36
  • The requirement for an approximation algorithm to be polynomial is something I haven't seen before and seems highly non-standard to me coming from an algorithmics point of view. For example, there definitely exist approximation algorithms in the sense of this answer that have exponential running time. Also, if approximation algorithms are required to be polynomial time, we wouldn't need add "polynomial time" to terms such as PTAS or FPTAS. Could you provide a reference to some text that uses this additional requirement? – Discrete lizard Jul 10 '19 at 13:28
  • @Discretelizard From Williamson and Shmoys's book on approximation algorithms, p. 14: "Definition 1.1. An $\alpha$-approximation algorithm for an optimization problem is a polynomial-time algorithm that..." I have no idea why the PT- is specified in PTAS or FPTAS -- that's weird, and a good question. – LarrySnyder610 Jul 10 '19 at 13:43
  • @LarrySnyder610 For completeness, I will give a source where polynomial run-time is not a requirement. This is harder to find on the internet than I had expected, but the widely used undergraduate algorithms textbook "Introduction to algorithms" by Cormen et al. (commonly referred to as CLRS) does not require approximation algorithms to run in polynomial time in their definition at the start of the relevant chapter. – Discrete lizard Jul 10 '19 at 14:25
  • Intuitively, there's little reason (in general) to choose a non-polynomial approximation algorithm over a non-polynomial exact algorithm. An approximation scheme is a family of algorithms; I believe the (F)PT indicates that all members of that family run in polynomial-time. – chepner Jul 10 '19 at 16:02
  • @Discretelizard That's interesting. Do they actually discuss approximation algorithms that are not polynomial time, or do they just omit polynomial time in their definition but then still basically assume it? – LarrySnyder610 Jul 10 '19 at 16:24
  • @chepner I agree, but it still seems redundant. If approximation algorithms are expected to be polynomial time (which I would argue they are), then all algorithms in the family are also expected to be poly time, so why call it a PTAS instead of just an AS? – LarrySnyder610 Jul 10 '19 at 16:25
  • The nomenclature is trying to be precise. There's no theoretical reason why an arbitrary approximation scheme couldn't have some algorithms that run in polynomial time, and some that do not. – chepner Jul 10 '19 at 16:30
  • 1
    @LarrySnyder610 The chapter mostly focuses on what they call polynomial-time approximation algorithms. But my main objection to the polynomial-time requirement in the definition of "approximation" is not that other algorithms are more relevant to study (not that they are irrelevant, mind you). It is mostly that the runtime of an algorithm is largely orthogonal to the type of output, from an algorithm-analysis point of view. This definition sounds to me like requiring, in the definition of a randomized algorithm, that it also run in polynomial time. – Discrete lizard Jul 10 '19 at 17:22
  • 1
    Sometimes it makes sense to talk about exponential or FPT algorithms and I find it strange to go out of your way to exclude them. – Discrete lizard Jul 10 '19 at 17:22
  • @Discretelizard I agree, and I think part of what's going on here is that approximation algorithms tend to be studied in the context of theoretical CS (or theoretical OR) -- we want provable guarantees, etc., in which case polynomial runtime is one aspect that we'd like to be able to prove. IMO approx alg's are not used that much in practice; instead, people use heuristics, which may not provide guarantees about either solution error or runtime, and those two performance measures are viewed orthogonally, as you say. – LarrySnyder610 Jul 10 '19 at 18:00
  • @chepner "there's little reason (in general) to choose a non-polynomial approximation algorithm over a non-polynomial exact algorithm" Sorry but that's just not true. An approximation algorithm that runs in time, say, 1.2^n is going to be a lot faster than an exact algorithm that takes time 2^n. – David Richerby Jul 11 '19 at 20:14
  • @LarrySnyder610 Sure, we'd like to prove things about our algorithms. We'd like them to have good error bounds and we'd like them to run quickly. But those are two completely orthogonal things. We'd like to have both but things don't always turn out that way. – David Richerby Jul 11 '19 at 20:16
  • @DavidRicherby By "in general", I meant you're probably not going to see runtimes like that. (At least, not in the types of problems I used to work on.) – chepner Jul 11 '19 at 20:30
  • @DavidRicherby Agreed. Did my comment imply something different? If so, I didn't mean to. – LarrySnyder610 Jul 11 '19 at 23:57
  • @LarrySnyder610 OK -- you seemed to me to be suggesting that polynomial time was so desirable that it's essentially in the definition. – David Richerby Jul 12 '19 at 08:44
  • @DavidRicherby No not at all. It is in the definition (at least, the definitions I am familiar with) but I'm not arguing that PT is critically important. For people who study approximation algorithms, PT seems to be an important aspect, but was trying to argue that that is more of a concern in theory than in practice. – LarrySnyder610 Jul 12 '19 at 14:10
7
  • According to L. Jourdan et al. [1], "NP-hard problems are difficult to solve and no polynomial-time algorithms are known for solving them. Most combinatorial optimization problems are NP-hard. Two approaches can be considered to solve this kind of problems depending on their size. For small instances, researchers usually use exact methods. Exact methods find the optimal solution and assess its optimality. There exist numerous exact methods such as the family of Branch and X, LP, DP and etc. Exact methods are known to be time expensive, so they can not be applied to large NP-hard problems or difficult ones. When instances become too large for exact methods, heuristics and in particular metaheuristics are often used".

  • The authors of [2] likewise point to solving time as the practical dividing line between exact and approximate methods for combinatorial optimization problems (COPs), saying: "The available techniques for COPs can roughly be classified into two main categories: exact and heuristic methods. Exact algorithms are guaranteed to find an optimal solution and to prove its optimality for every instance of a COP. The run-time, however, often increases dramatically with the instance size, and often only small or moderately-sized instances can be practically solved to provable optimality. In this case, the only possibility for larger instances is to trade optimality for run-time, yielding heuristic algorithms. In other words, the guarantee of finding optimal solutions is sacrificed for the sake of getting good solutions in a limited time."

The last two sentences of the above quotation describe exactly the approach you followed in your paper.

So my interpretation is: "All exact methods can solve large instances of COPs optimally if there are no limitations on their resources in terms of time and computational power; on the other hand, no method can be considered exact unless the optimality of its solution can be proven."

[1]: Jourdan, Laetitia, Matthieu Basseur, and E-G. Talbi. "Hybridizing exact methods and metaheuristics: A taxonomy." European Journal of Operational Research 199.3 (2009): 620-629.

[2]: Puchinger, Jakob, and Günther R. Raidl. "Combining metaheuristics and exact algorithms in combinatorial optimization: A survey and classification." International Work-Conference on the Interplay Between Natural and Artificial Computation. Springer, Berlin, Heidelberg, 2005.

Oguz Toragay
  • 8,652
  • 2
  • 13
  • 41