
I have a problem that can be solved with linear programming, but I'm hoping there's a combinatorial algorithm for this (even approximation is fine).

This is basically a load balancing problem using concepts from the set cover problem. Let $U$ be the set of elements and $S$ be the set of sets whose union is $U$ (just like in set cover). Let $x_i$ be a weight associated with $s_i \in S$ for $i \ge 1$ ($x_0$ below is an auxiliary variable, not a set weight). Then the LP I am solving is as follows:

$$ \min x_0 \\ \sum_{i:u \in s_i}x_i \le x_0 \quad \forall u \in U \\ \sum_{i \ge 1} x_i = 1 \\ x_i \ge 0 $$

Intuitively, I am trying to "weight" my sets so that no single element gets overloaded: the weight of an element is the sum of the weights of the sets that contain it, and I am minimizing the maximum element weight.
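
For concreteness, here is a minimal sketch of this LP in Python using `scipy.optimize.linprog`; the instance is made up for illustration, and the decision variables are ordered as $[x_0, x_1, \dots, x_m]$.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (illustrative only): U = {0, 1, 2, 3}, S = [s_1, s_2, s_3]
U = [0, 1, 2, 3]
S = [{0, 1}, {1, 2}, {2, 3}]
m = len(S)

# Objective: minimize x_0
c = np.zeros(m + 1)
c[0] = 1.0

# For each element u: sum_{i : u in s_i} x_i - x_0 <= 0
A_ub = np.zeros((len(U), m + 1))
for row, u in enumerate(U):
    A_ub[row, 0] = -1.0
    for i, s in enumerate(S, start=1):
        if u in s:
            A_ub[row, i] = 1.0
b_ub = np.zeros(len(U))

# sum_{i >= 1} x_i = 1
A_eq = np.zeros((1, m + 1))
A_eq[0, 1:] = 1.0
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m + 1))
print(res.x[0], res.x[1:])            # optimal max load and the set weights
```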

  • I guess an easy approximate solution is just to assign equal weight to all sets, but I'm hoping to do better than that... – rface Aug 01 '14 at 20:51
  • If it helps, all sets $s_i$ happen to have the same size. – rface Aug 01 '14 at 23:39
  • Check the section on "Maximum Fractional Bipartite Matching" in the answer to this question: https://cstheory.stackexchange.com/questions/4697/toy-examples-for-plotkin-shmoys-tardos-and-arora-kale-solvers/14388#14388 . Briefly, you have a packing problem (packing sets under elements), which can be approximately solved by a greedy / Lagrangian-relaxation algorithm. E.g., initialize each $x_i = 0$, maintain a weight on each element $u$ equal to $w_u(x) = (1+\epsilon)^{\sum_{i : u\in s_i} x_i}$, repeat: choose a set $s_i$ minimizing $\sum_{u\in s_i} w_u(x)$; increase $x_i$ by 1... – Neal Young Sep 25 '18 at 01:11
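
For concreteness, here is a rough Python sketch of the greedy / Lagrangian-relaxation idea from the last comment; the iteration count and the final normalization (so the weights sum to 1) are my own assumptions, and the linked answer has the actual algorithm and guarantees.

```python
def balance_weights(U, S, eps=0.1, T=1000):
    """Greedy multiplicative-weights heuristic for the load-balancing LP above."""
    x = [0.0] * len(S)                       # one (unnormalized) weight per set
    for _ in range(T):
        # current load of each element u under x
        load = {u: sum(x[i] for i, s in enumerate(S) if u in s) for u in U}
        w = {u: (1 + eps) ** load[u] for u in U}          # element weights
        # pick the set whose elements are currently cheapest
        best = min(range(len(S)), key=lambda i: sum(w[u] for u in S[i]))
        x[best] += 1.0
    total = sum(x)
    return [xi / total for xi in x]          # normalize so the weights sum to 1

# Illustrative instance:
U = [0, 1, 2, 3]
S = [{0, 1}, {1, 2}, {2, 3}]
print(balance_weights(U, S))
```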

1 Answer


Here is an approximate solution which is (perhaps) faster than solving the LP. For simplicity, assume $U = \{1, 2, \dots, n\}$, $|S| = m$, and $|s_i| = m_i$. Let $\mathbf{C}$ be the $n \times m$ matrix with $C_{ji} = 1 \iff j \in s_i$. Writing $\mathbf{h} = \mathbf{C} \mathbf{x}$, your problem becomes $$ \min \; \|\mathbf{h}\|_{\infty} \quad \text{s.t.} \quad \mathbf{1}^T \mathbf{x} = 1, \quad \mathbf{x} \ge 0. $$
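
As a small sanity check of this matrix formulation, here is a numpy sketch (the instance is made up for illustration):

```python
import numpy as np

U = [0, 1, 2, 3]                      # n = 4 elements, labelled 0..n-1 here
S = [{0, 1}, {1, 2}, {2, 3}]          # m = 3 sets (illustrative only)

# C[j, i] = 1  iff  element j is in set s_i
C = np.zeros((len(U), len(S)))
for i, s in enumerate(S):
    for j in s:
        C[j, i] = 1.0

x = np.full(len(S), 1.0 / len(S))     # any feasible x with 1^T x = 1, x >= 0
h = C @ x                             # per-element loads
print(np.linalg.norm(h, np.inf))      # the objective ||h||_inf
```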

Now $\|\mathbf{h}\|_{\infty} \le \|\mathbf{h}\|_p$ for any $p \ge 1$, so we use the $p$-norm as a surrogate; to minimize $\|\mathbf{h}\|_p$ we can equivalently minimize $\|\mathbf{h}\|_p^p$.

Let $\lambda$ be the dual variable for the constraint $\mathbf{1}^T \mathbf{x} = 1$. The Lagrangian is

$$ L(\mathbf{x}, \lambda) = \sum_{j = 1}^{n} h_j^p + \lambda (1 - \mathbf{1}^T \mathbf{x} ), $$ and since $h_j = \sum_i c_{ji} x_i$, the chain rule gives the stationarity condition $$ \frac{\partial L}{\partial x_i} = p \sum_{j = 1}^{n} h_j^{p - 1} c_{ji} - \lambda = 0. $$

Solving this system exactly is hard. Two alternatives here:

  1. Coarse approximation: let's assume the balancing is perfect and all $h_j$ are roughly equal. Then $x_i = c / m_i$, where $c$ is chosen so that $\sum_i x_i = 1$.
  2. Run a fixed-point iteration: assume some value for $\mathbf{h}$, find $\mathbf{x}$ from the condition above, recompute $\mathbf{h}$, and so on until it converges (see the sketch below this list). It might not converge for large values of $p$, but my guess is that it does for small ones.
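
Here is one possible reading of the fixed-point iteration in item 2, as Python. The concrete update rule (making $x_i$ inversely proportional to $\sum_{j \in s_i} h_j^{p-1}$, then renormalizing) is my own assumption and is not spelled out above, so treat it as a sketch rather than the answer's method.

```python
import numpy as np

def fixed_point_weights(C, p=4, iters=100, tol=1e-9):
    """Heuristic fixed-point iteration for min ||Cx||_p with 1^T x = 1, x >= 0."""
    n, m = C.shape
    x = np.full(m, 1.0 / m)                    # start from uniform weights
    for _ in range(iters):
        h = C @ x                              # current per-element loads
        score = C.T @ (h ** (p - 1))           # score_i = sum_{j in s_i} h_j^(p-1)
        x_new = 1.0 / np.maximum(score, tol)   # assumed update rule (see above)
        x_new /= x_new.sum()                   # keep 1^T x = 1
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x

# Usage with an incidence matrix C as sketched earlier in this answer:
# x = fixed_point_weights(C, p=4)
# print(np.max(C @ x))                         # resulting maximum load
```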
Ehsan