
The optimization task

The optimization task considered here is:

A square matrix $A\in \mathbb{R}^{5\times 5}$ satisfies the following conditions: the elements of the first row are all 1, and the elements of the second row are, from left to right, 1, -1, 1, -1, 1. Moreover, $\text{trace}(A A^{\mathsf{T}}) = 28$. Find the maximum value of $\det(A)$.
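Since $\text{trace}(A A^{\mathsf{T}}) = \sum_{i,j} a_{ij}^2$ and the two fixed rows contribute $5 + 5 = 10$ to this sum, the constraint is equivalent to requiring the fifteen free entries in rows 3 to 5 to satisfy $\sum_{i=3}^{5}\sum_{j=1}^{5} a_{ij}^2 = 18$. This is why the constraint printed by the script below contains the constant term 10.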

Key points

Gurobi version 11.0

  • According to this answer, we can transform the problem into a bilinear problem by introducing new variables and corresponding constraints. What is the fastest way to convert the problem into a bilinear objective? Is there any automation tool? (A minimal sketch of this reduction appears after this list.)

  • gurobipy.GurobiError: Unable to convert argument to an expression. How do I convert a SymPy expression to a Gurobi expression?

  • gurobipy.GurobiError: Invalid argument to QuadExpr multiplication. Does Gurobi only support objectives of polynomial degree at most 2? Does YALMIP support higher-degree objectives?
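For the first bullet, the standard reduction is: for every product of three variables, introduce an auxiliary variable equal to the product of two of them, so that both the new constraint and the remaining product are bilinear. Below is a minimal sketch of the idea on a single term x*y*z; it is not the model from this question, and the variable names and the bounding constraint are illustrative only.

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("trilinear_to_bilinear_sketch")
x = m.addVar(lb=-GRB.INFINITY, name="x")
y = m.addVar(lb=-GRB.INFINITY, name="y")
z = m.addVar(lb=-GRB.INFINITY, name="z")
w = m.addVar(lb=-GRB.INFINITY, name="w")   # auxiliary variable standing for y*z

m.addConstr(w == y * z, name="aux_yz")                 # bilinear constraint
m.addConstr(x * x + y * y + z * z == 3, name="norm")   # illustrative constraint that keeps the problem bounded
m.setObjective(x * w, GRB.MAXIMIZE)                    # x*w is bilinear; x*y*z would be cubic and rejected
m.setParam('NonConvex', 2)                             # allow nonconvex quadratic constraints and objective
m.optimize()

Gurobi accepts this because every nonlinearity is at most quadratic; the original trilinear term never appears in the model.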

Description

I was trying to use Gurobi for this optimization.

But it raises gurobipy.GurobiError: Unable to convert argument to an expression:

Set parameter NonConvex to value 2
objective=2*x31*x42*x53 - 2*x31*x42*x55 - 2*x31*x43*x52 + 2*x31*x43*x54 - 2*x31*x44*x53 + 2*x31*x44*x55 + 2*x31*x45*x52 - 2*x31*x45*x54 - 2*x32*x41*x53 + 2*x32*x41*x55 + 2*x32*x43*x51 - 2*x32*x43*x55 - 2*x32*x45*x51 + 2*x32*x45*x53 + 2*x33*x41*x52 - 2*x33*x41*x54 - 2*x33*x42*x51 + 2*x33*x42*x55 + 2*x33*x44*x51 - 2*x33*x44*x55 - 2*x33*x45*x52 + 2*x33*x45*x54 + 2*x34*x41*x53 - 2*x34*x41*x55 - 2*x34*x43*x51 + 2*x34*x43*x55 + 2*x34*x45*x51 - 2*x34*x45*x53 - 2*x35*x41*x52 + 2*x35*x41*x54 + 2*x35*x42*x51 - 2*x35*x42*x53 + 2*x35*x43*x52 - 2*x35*x43*x54 - 2*x35*x44*x51 + 2*x35*x44*x53
constraint=x31**2 + x32**2 + x33**2 + x34**2 + x35**2 + x41**2 + x42**2 + x43**2 + x44**2 + x45**2 + x51**2 + x52**2 + x53**2 + x54**2 + x55**2 + 10
Traceback (most recent call last):
  File "E:\Desktop\gurobipy_demo\gurobi_py_example.py", line 272, in <module>
    m.setObjective(objective, GRB.MAXIMIZE)
  File "src\\gurobipy\\model.pxi", line 1488, in gurobipy.Model.setObjective
  File "src\\gurobipy\\util.pxi", line 44, in gurobipy._simpleexpr
gurobipy.GurobiError: Unable to convert argument to an expression
import gurobipy as gp
from gurobipy import GRB
from sympy import symbols, Matrix, det

# Define the size of the matrix
n = 5

# Create a symbolic matrix
x = symbols('x1:6(1:6)')
matrixA = Matrix(n, n, lambda i, j: x[n*i + j])

# Replace the first two rows with the given arrays
print(f"{matrixA[0, :]=}")

matrixA[0, :] = Matrix([[1]*n])
matrixA[1, :] = Matrix([[1, -1, 1, -1, 1]])

# Calculate the determinant using sympy
detA = det(matrixA)

# Create a new model
m = gp.Model("max_det")
m.setParam('nonconvex', 2)

# Define the variables
gurobi_vars = matrixA[2:, :].reshape(3*n, 1)
vars = m.addVars(len(gurobi_vars), lb=-GRB.INFINITY, ub=GRB.INFINITY)

# Update model to integrate new variables
m.update()

# Define the objective to maximize the determinant
objective = detA.subs({sym: var for sym, var in zip(gurobi_vars, vars.values())})
print(f"{objective=}")

# Add the constraint for the trace
AmultTransposeA = matrixA @ (matrixA.T)
trace = sum(AmultTransposeA[i, i] for i in range(n))
constraint = trace.subs({sym: var for sym, var in zip(gurobi_vars, vars.values())})
print(f"{constraint=}")

m.setObjective(objective, GRB.MAXIMIZE)  # this call raises the GurobiError shown in the traceback above
m.addConstr(constraint == 28, "trace_constraint")

# Optimize the model
m.optimize()

# Output the solution
if m.status == GRB.OPTIMAL:
    solution = m.getAttr('x', vars)
    print("The optimized determinant is: ", m.objVal)
    for v in vars.values():
        print(f"{v.VarName} = {solution[v]}")
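The error happens because detA.subs(...) produces a SymPy expression that contains Gurobi Var objects, and gurobipy cannot convert that object into one of its own expressions. One possible workaround (not from the original post) is to expand the SymPy polynomial and rebuild a gurobipy expression term by term. The helper below is only a sketch and is not part of either library; sym_to_var is assumed to be the same {sympy symbol: gurobipy Var} mapping built in the script above. Note that this only gets past the conversion error: the determinant is still a degree-3 polynomial in the free entries, so Gurobi will then reject it with the QuadExpr multiplication error from the third bullet unless the auxiliary-variable reduction is applied first.

from sympy import Poly, expand

def sympy_to_gurobi(expr, sym_to_var):
    """Rebuild a gurobipy expression from a SymPy polynomial, one monomial at a time."""
    poly = Poly(expand(expr), *sorted(expr.free_symbols, key=str))
    gexpr = 0
    for monom, coeff in poly.terms():
        term = float(coeff)
        for sym, power in zip(poly.gens, monom):
            for _ in range(power):
                term = term * sym_to_var[sym]   # fails once the monomial degree exceeds 2
        gexpr = gexpr + term
    return gexpr

# Works for the quadratic trace expression; the cubic detA still needs the bilinear reduction:
# constraint_expr = sympy_to_gurobi(trace, {sym: var for sym, var in zip(gurobi_vars, vars.values())})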

# Assuming you have already initialized your Gurobi model 'm' and added variables

# Define your variables in Gurobi
x = m.addVars([(i,j) for i in range(3, 6) for j in range(1, 6)], lb=-GRB.INFINITY, name="x")

# Update the model to integrate new variables
m.update()

# Now, build the objective function using Gurobi's quicksum and the provided expression
objective = (2*x[3,1]*x[4,2]*x[5,3] - 2*x[3,1]*x[4,2]*x[5,5] - 2*x[3,1]*x[4,3]*x[5,2] +
             2*x[3,1]*x[4,3]*x[5,4] - 2*x[3,1]*x[4,4]*x[5,3] + 2*x[3,1]*x[4,4]*x[5,5] +
             2*x[3,1]*x[4,5]*x[5,2] - 2*x[3,1]*x[4,5]*x[5,4] - 2*x[3,2]*x[4,1]*x[5,3] +
             2*x[3,2]*x[4,1]*x[5,5] + 2*x[3,2]*x[4,3]*x[5,1] - 2*x[3,2]*x[4,3]*x[5,5] -
             2*x[3,2]*x[4,5]*x[5,1] + 2*x[3,2]*x[4,5]*x[5,3] + 2*x[3,3]*x[4,1]*x[5,2] -
             2*x[3,3]*x[4,1]*x[5,4] - 2*x[3,3]*x[4,2]*x[5,1] + 2*x[3,3]*x[4,2]*x[5,5] +
             2*x[3,3]*x[4,4]*x[5,1] - 2*x[3,3]*x[4,4]*x[5,5] - 2*x[3,3]*x[4,5]*x[5,2] +
             2*x[3,3]*x[4,5]*x[5,4] + 2*x[3,4]*x[4,1]*x[5,3] - 2*x[3,4]*x[4,1]*x[5,5] -
             2*x[3,4]*x[4,3]*x[5,1] + 2*x[3,4]*x[4,3]*x[5,5] + 2*x[3,4]*x[4,5]*x[5,1] -
             2*x[3,4]*x[4,5]*x[5,3] - 2*x[3,5]*x[4,1]*x[5,2] + 2*x[3,5]*x[4,1]*x[5,4] +
             2*x[3,5]*x[4,2]*x[5,1] - 2*x[3,5]*x[4,2]*x[5,3] + 2*x[3,5]*x[4,3]*x[5,2] -
             2*x[3,5]*x[4,3]*x[5,4] - 2*x[3,5]*x[4,4]*x[5,1] + 2*x[3,5]*x[4,4]*x[5,3])

# Set the objective
m.setObjective(objective, GRB.MAXIMIZE)

# Similarly for the constraint
constraint = (x[3,1]**2 + x[3,2]**2 + x[3,3]**2 + x[3,4]**2 + x[3,5]**2 +
              x[4,1]**2 + x[4,2]**2 + x[4,3]**2 + x[4,4]**2 + x[4,5]**2 +
              x[5,1]**2 + x[5,2]**2 + x[5,3]**2 + x[5,4]**2 + x[5,5]**2 + 10)

# Add the constraint to the model
m.addConstr(constraint == 28, "trace_constraint")

# Optimize the model
m.optimize()

# Output the solution
if m.status == GRB.OPTIMAL:
    solution = m.getAttr('x', x)
    print("The optimized determinant is: ", m.objVal)
    for var in x.values():
        print(f"{var.VarName} = {var.X}")

2 Answers


You can apply gurobipy; you just need to approach it differently. How would you code this up for a 100x100 matrix? The current approach with a hardcoded formula is impractical. Instead, use the power of recursion:

import gurobipy as gp
import numpy as np

def det(A):
    v = m.addVar(lb=-float("inf"))
    if A.shape == (2,2):
        m.addConstr(v == A[0,0]*A[1,1] - A[1,0]*A[0,1])
        return v
    expr = gp.QuadExpr()
    cofactor = 1
    for i in range(A.shape[1]):
        cols = [c for c in range(A.shape[1]) if c != i]
        expr += cofactor*A[0,i]*det(A[1:][:,cols])
        cofactor = -cofactor
    m.addConstr(v == expr)
    return v

def trace_of_A_mult_TransposeA(A):
    expr = gp.QuadExpr()
    for i in range(A.shape[0]):
        expr += A[i]@A[i]
    return expr

m = gp.Model()
x = m.addMVar((5,5), lb=-float("inf"))
m.addConstr(x[0] == np.array([1]*5))
m.addConstr(x[1] == np.array([1,-1,1,-1,1]))
m.addConstr(trace_of_A_mult_TransposeA(x) == 28)
m.setObjective(det(x), gp.GRB.MAXIMIZE)

m.optimize()
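If the solve finishes, the result can be read back from the MVar. A short usage sketch, assuming the model above reached optimality:

if m.Status == gp.GRB.OPTIMAL:
    print("max det(A) =", m.ObjVal)
    print(x.X)   # the optimized 5x5 matrix as a NumPy array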


My final plan is to use MATLAB with YALMIP.

The plan really does work.



clear all; close all; clc;

% Define the size of the matrix
n = 5;

% Create the matrix A with YALMIP decision variables
A = sdpvar(n, n, 'full');

% Set the first row to all ones
A(1,:) = ones(1, n);

% Set the second row to the specified pattern
A(2,:) = [1, -1, 1, -1, 1];

% Define the constraint for the trace of A * A'
Constraint = (trace(A * A.') == 28);

% Objective function is the determinant of A
Objective = for_loop_implemented_det(A); % Directly using det(A) here doesn't work; YALMIP can't analyze it

% Options for the solver (optional, depending on the solver you use)
% Define solver settings with increased maximum iterations
options = sdpsettings('verbose', 1, 'solver', 'BMIBNB', 'debug', 1, ...
    'bmibnb.maxiter', 10000, 'bmibnb.maxtime', 3600);

% Run the optimization to maximize the determinant of A subject to the constraint.
% optimize() minimizes its second argument, so we pass -Objective to maximize det(A).
sol = optimize(Constraint, -Objective, options);

% Check if the solution is successful
if sol.problem == 0
    % Solution is found
    disp('The optimized matrix A is:');
    A_value = value(A);
    disp(A_value);
    fprintf("det(A)=%g\n", det(A_value));
else
    % No solution found
    disp('Failed to optimize the determinant subject to the constraint.');
    disp(yalmiperror(sol.problem));
end

%% function definition
function det_A = for_loop_implemented_det(A)
% Check if A is a square matrix
[m, n] = size(A);
if m ~= n
    error('Input must be a square matrix.');
end

% Initialize the determinant to zero
det_A = 0;

% Generate all permutations of the indices
permsMatrix = perms(1:n);
nPerms = size(permsMatrix, 1);

% Construct the determinant calculation
for i = 1:nPerms
    % Get current permutation
    currentPerm = permsMatrix(i, :);

    % Calculate the sign of the permutation (Levi-Civita symbol)
    inversions = 0;
    for j = 1:n
        for k = j+1:n
            if currentPerm(j) > currentPerm(k)
                inversions = inversions + 1;
            end
        end
    end
    sign = (-1) ^ inversions;

    % Create product of elements as per the permutation
    productOfElements = 1;
    for j = 1:n
        productOfElements = productOfElements * A(j, currentPerm(j));
    end

    % Add to the determinant sum
    det_A = det_A + sign * productOfElements;
end

end
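For reference, for_loop_implemented_det builds the determinant from the Leibniz formula, $\det(A)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^{n} a_{i,\sigma(i)}$, which YALMIP can handle because it is an explicit polynomial in the sdpvar entries.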
