
Are there any well-known optimization libraries (ideally with Python bindings, or even in Python) supporting (unconstrained) minimization of $f:\mathbb{R}^n \to \mathbb{R}$ for $n \sim 10^1$–$10^2$, with support for arbitrary-precision input/output?

I have a (mathematical physics) problem where I genuinely want to minimize to very high precision, and e.g. the standard routines of scipy.optimize fail to converge to the precision I want. Any thoughts appreciated -- thanks!

3 Answers


Optim.jl from Julia works with the number types that you give it, so if you make it use BigFloats then it'll do that. It has local derivative-based, derivative-free, and global methods, and it integrates with automatic differentiation. From Julia, it's just:

using Optim
# Classic Rosenbrock test function; its minimum of 0 is at (1, 1)
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
# big.(zeros(2)) gives a Vector{BigFloat} start, so everything runs in BigFloat
result = optimize(rosenbrock, big.(zeros(2)), BFGS())

and that's using arbitrary-precision BigFloats, so setprecision(512) is then how you set the precision in bits.

For using it from Python, you can use pyjulia, installed via python3 -m pip install julia, and then just do the call:

import julia
julia.install()  # only needed once, to set up the Julia/PyCall link
from julia import Base
from julia import Optim

# Same Rosenbrock function on the Python side (note ** for powers, not ^)
def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

result = Optim.optimize(rosenbrock, [Base.big(0), Base.big(0)], Optim.BFGS())

That should be all it takes? (I didn't run it to double-check, but I have used this pattern a bit from diffeqpy and am extrapolating the semantics.)
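If the call succeeds, pulling the answer back into Python is just more of the same bridged API. A minimal sketch, assuming the code above ran (Optim.minimizer and Optim.minimum are Optim.jl's accessors for the argmin and the objective value):

print(Optim.minimizer(result))  # argmin, should approach [1, 1]
print(Optim.minimum(result))    # objective value there, should approach 0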

The only other thing I can think of would possibly be something in Boost, since most of Boost is templated.

Chris Rackauckas

Chris's code doesn't work on my machine, so here is my solution.

import julia
# julia.install()  # <- this is probably needed the first time
from julia.api import Julia
jl = Julia(compiled_modules=False)  # workaround for statically linked libpython
from julia import Optim

# The objective, defined on the Julia side as an anonymous function;
# ((x,y),) destructures the input vector into its two components
pyrosen = "((x,y),) -> (1.0 - x)^2 + 100.0 * (y - x^2)^2; "

jl.eval("setprecision(512)")  # BigFloat precision in bits
result = jl.eval(
    "rosenbrock = " + pyrosen
    + "res = Optim.optimize(rosenbrock, [Base.big(0.0), Base.big(0.0)],"
    + " Optim.BFGS(), Optim.Options(g_tol=1e-150))")
args = jl.eval("Optim.minimizer(res)")

Remark 1: you need to set both the BigFloat precision (via setprecision) and the Optim.optimize convergence tolerance (via g_tol in Optim.Options) to get a high-precision result.

Remark 2: the starting vector should contain BigFloats, not BigInts, so it's important to use big(0.0), not big(0).
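As a quick sanity check, here is a sketch (untested, and assuming res from the snippet above) that exploits the fact that the Rosenbrock minimizer is exactly (1, 1); formatting the error as a string on the Julia side sidesteps any BigFloat-to-float conversion at the PyCall boundary:

# Max componentwise distance from the true minimizer (1, 1), as a string
err = jl.eval("string(maximum(abs.(Optim.minimizer(res) .- big(1.0))))")
print(err)  # with setprecision(512) and g_tol=1e-150, expect far below 1e-16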

urojony

For me, I managed to get Chris's code working in Python by doing:

import julia
julia.install()  # only needed the first time
from julia import Base
from julia import Optim

Base.setprecision(512)  # BigFloat precision in bits

def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

result = Optim.optimize(rosenbrock, [Base.big(0.0), Base.big(0.0)],
                        Optim.BFGS(), Optim.Options(g_tol=1e-150))
args = Optim.minimizer(result)
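If it runs, a quick way to eyeball the result (Optim.minimum is Optim.jl's accessor for the objective value at the optimum):

print(args)                   # minimizer, should be very close to [1, 1]
print(Optim.minimum(result))  # objective value there, should be very close to 0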