7

I have a relatively simple convex optimization problem that involves less than 100 variables but contains a terribly ill-conditioned matrix. I have tried CVX and CPLEX; even though both can typically solve the problem in about 1 second, both fail when the condition number of the matrix becomes very large. An arbitrary-precision solver would be able to solve this problem quickly and accurately. Does any such implementation exist?

Note: The conditioning of the problem has been considered in detail and is not part of this question. I'm just asking about software.

David Ketcheson
  • If the matrix is terribly ill-conditioned, there's always the possibility that you're "asking the wrong question"; see if you can reformulate your original problem so that you don't have to deal with ill-conditioned matrices. – J. M. Nov 30 '11 at 06:56
  • You are using a penalty method, aren't you? Didn't we have this discussion last week? ;-) Can you build any of your favorite optimization packages with __float128? That would probably be enough to handle your penalty. – Jed Brown Nov 30 '11 at 07:04
  • 1
    Instead of penalties, could you use constraints and an active set method? – Matt Knepley Nov 30 '11 at 13:54
  • The ill-conditioning is unrelated to the penalty formulation. I originally tried it as a constrained feasibility problem and got worse results. – David Ketcheson Nov 30 '11 at 15:12
  • J.M.: Thanks; I am indeed trying to "ask the question in a different basis"; see my question about orthogonal polynomials: http://scicomp.stackexchange.com/questions/32/polynomials-that-are-orthogonal-over-curves-in-the-complex-plane – David Ketcheson Nov 30 '11 at 15:13
  • I guess my question should be, why is the system so close to singular? If it's due to multiple solutions, can it be regularized to pick out one? I am thinking here of what happens when removing highly oscillatory solutions. – Matt Knepley Nov 30 '11 at 16:23
  • The matrix involved is not square, but rather tall and thin. The most obvious formulation of the problem involves a large Vandermonde matrix, but over points in the complex plane. So it's ill-conditioned because I don't know how to choose a polynomial basis that is well conditioned with respect to arbitrary sets of points in the complex plane. – David Ketcheson Nov 30 '11 at 18:01
  • 1
    Have you considered B-splines or radial basis functions for your basis? – rcollyer Nov 30 '11 at 19:41

3 Answers

3

As far as I know, there is no complete library that does what you are looking for, but parts of it are available. mpmath is an arbitrary-precision library for Python that includes a matrix module with linear-algebra functionality. It should not be too difficult to implement a convex solver on top of it.
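As a sketch of the building blocks mpmath provides, here is a demonstration of how precision alone changes the outcome on an ill-conditioned dense system. The Hilbert-matrix test case and the digit counts are my own illustrative choices, not part of the original answer:

```python
import mpmath

def solve_hilbert(n, dps):
    """Solve H x = b for a Hilbert matrix H at `dps` decimal digits,
    where b is built so the exact solution is the all-ones vector.
    Returns the maximum componentwise error of the computed solution."""
    with mpmath.workdps(dps):
        H = mpmath.hilbert(n)              # classic ill-conditioned test matrix
        x_true = mpmath.matrix([1] * n)
        b = H * x_true
        x = mpmath.lu_solve(H, b)
        return max(abs(x[i] - 1) for i in range(n))

# At ~double precision the solution is badly wrong; at 50 digits it is accurate.
err16 = solve_hilbert(12, 16)
err50 = solve_hilbert(12, 50)
print(err16, err50)
```

The same `mpmath.workdps` pattern would wrap whatever linear algebra a hand-rolled interior-point or Newton iteration needs (`lu_solve`, `qr_solve`, `cholesky` are all available).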

Note: I agree with the comments above. If a numerical problem is terribly ill-conditioned, I'd recommend that you understand why, and try to reformulate the problem to lower the condition number.

  • I know almost nothing about implementing a convex solver, but if somebody wanted to do it that would be wonderful. – David Ketcheson Dec 02 '11 at 15:23
  • I have trouble with this answer only because "implement a convex solver" can range from "let's implement Newton's method" to something like "let's flesh out an SQP algorithm with good initial guess and adaptive step-size heuristics". It's absolutely correct in principle, but nontrivial (and probably slightly time-consuming) in practice. Also, from an optimization perspective, it's probably not research-worthy on its own unless you can demonstrate it on a class of ill-conditioned problems that are important in practice (maybe David's problems would suffice?). – Geoff Oxberry Feb 24 '12 at 03:41
2

"I don't know how to choose a polynomial basis that is well conditioned with respect to arbitrary sets of points in the complex plane."

If the set of points is bounded, a good basis for polynomials of degree $d$ is the Lagrange polynomials of $d+1$ reasonably spaced points along an enclosing contour. This will give you far better results than a multiprecision solution of a problem involving a Vandermonde matrix.
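A quick numerical illustration of the gap, comparing the monomial (Vandermonde) basis with a Lagrange basis on an enclosing circle. The disk radius, circle radius, degree, and sample points below are illustrative choices of mine, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 15                                    # polynomial degree

# Sample points scattered in a disk of radius 4 in the complex plane.
z = 4 * np.sqrt(rng.uniform(size=40)) * np.exp(2j * np.pi * rng.uniform(size=40))

# Monomial (Vandermonde) basis: columns z^0, ..., z^d.
V = z[:, None] ** np.arange(d + 1)

# Lagrange basis on d+1 equispaced nodes along an enclosing circle of radius 5.
t = 5 * np.exp(2j * np.pi * np.arange(d + 1) / (d + 1))
L = np.empty((z.size, d + 1), dtype=complex)
for j in range(d + 1):
    mask = np.arange(d + 1) != j
    L[:, j] = np.prod((z[:, None] - t[mask]) / (t[j] - t[mask]), axis=1)

condV = np.linalg.cond(V)
condL = np.linalg.cond(L)
print(condV, condL)   # Vandermonde condition number is orders of magnitude larger
```

Both matrices span the same polynomial space; only the basis differs, which is the whole point: a well-chosen basis can make multiprecision unnecessary.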

Arnold Neumaier
1

Here is a complete package (with the exception of sparse matrices) for linear algebra in arbitrary precision:

Multiprecision Computing Toolbox for MATLAB

It integrates smoothly with MATLAB and provides routines for all common operations, from determinants to SVD and eigenvalues.

It also covers other areas: numerical integration, optimization, ODEs, special functions, etc.

For function minimization, you could try the Nelder–Mead simplex method (fminsearch), which is also implemented in the toolbox.