
Matrix rigidity was introduced by Valiant in 1977:

The rigidity $Rig_M(r)$ of a boolean matrix $M$ over GF(2) is the smallest number of entries of $M$ that must be changed in order to reduce its rank over GF(2) down to $r$.
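For concreteness, the rank over GF(2) in this definition can be computed by Gaussian elimination where row operations are bitwise XOR. A minimal sketch (the function name `gf2_rank` and the list-of-0/1-lists representation are illustrative choices, not part of the definition):

```python
def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination with XOR."""
    rows = [row[:] for row in matrix]  # work on a copy
    rank = 0
    n_cols = len(rows[0]) if rows else 0
    for col in range(n_cols):
        # find a pivot row with a 1 in this column, at or below the current rank
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # XOR the pivot row into every other row that has a 1 in this column
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank
```

For example, `gf2_rank([[1,0,1],[0,1,1],[1,1,0]])` is 2, since the third row is the XOR of the first two.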

$Rig_M(r)$ has a deep connection to boolean function complexity and circuit lower bounds, via a result of Razborov (1989); see e.g. [1]. It also has a connection to finding locally correctable codes; see e.g. [2].

It appears to me that there is little to no published empirical analysis of $Rig_M(r)$, and that such analysis could give useful insight into its properties (I am thinking, for example, of the phase-transition research on SAT).

As a start, I was interested in analyzing $Rig_M(r)$ using a simple greedy algorithm. The idea is as follows:

  • Generate random matrices $M$.

  • Compute Delta_row(i, j), where i, j index rows (vectors) of $M$ and Delta_row(i, j) is the Hamming distance between the two rows.

  • Likewise compute Delta_col(i, j), where i, j index columns of $M$ (equivalent to computing Delta_row(i, j) on the transpose of $M$).

  • Sort the Delta_row(i, j) and Delta_col(i, j) values and remove from $M$ the row or column with the lowest value (the lowest value is associated with a pair of rows or columns; remove either member of the pair).

This greedy algorithm can be used to estimate $Rig_M(r)$ by giving a (fairly tight?) upper bound: one repeatedly reapplies it until the matrix $M$ has been reduced to size $r$, and sums the Delta_row/Delta_col values of the removed rows and columns. Note that upon removing a row or column, the new Delta_row and Delta_col arrays can be updated efficiently without recomputing them from scratch.
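The procedure above can be sketched as follows (names are illustrative; for simplicity this version recomputes the Hamming distances after each removal rather than updating them incrementally, and it returns the raw sum of distances, i.e. it does not correct for the double-counting subtlety raised in the comments below):

```python
def hamming(u, v):
    """Hamming distance between two equal-length 0/1 vectors."""
    return sum(a != b for a, b in zip(u, v))

def greedy_rigidity_upper_bound(M, r):
    """Greedy estimate as described above: repeatedly find the closest pair
    of rows or columns (by Hamming distance), add that distance to the
    running cost, and drop one member of the pair, until only r rows and
    r columns remain."""
    rows = [row[:] for row in M]
    cost = 0
    while len(rows) > r or len(rows[0]) > r:
        best = None  # (distance, axis, index to drop)
        if len(rows) > r:
            for i in range(len(rows)):
                for j in range(i + 1, len(rows)):
                    d = hamming(rows[i], rows[j])
                    if best is None or d < best[0]:
                        best = (d, 'row', j)
        if len(rows[0]) > r:
            cols = list(zip(*rows))
            for i in range(len(cols)):
                for j in range(i + 1, len(cols)):
                    d = hamming(cols[i], cols[j])
                    if best is None or d < best[0]:
                        best = (d, 'col', j)
        d, axis, j = best
        cost += d
        if axis == 'row':
            del rows[j]
        else:
            rows = [row[:j] + row[j + 1:] for row in rows]
    return cost
```

For example, a matrix of two identical rows already has rank 1, and the sketch correctly reports cost 0 for reducing it to size 1.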

  1. How far off can the greedy algorithm be from the optimal value $Rig_M(r)$?

  2. What kind of matrices maximize the difference between the greedy estimate and the optimal rigidity measure?

  3. Can a modified local greedy algorithm find the optimal value $Rig_M(r)$?

[1] Boolean Function Complexity by Stasys Jukna, 2011, sec. 12.8, "Rigid matrices require large circuits"

[2] On Matrix Rigidity and Locally Self-Correctable Codes by Zeev Dvir

[3] ECCC reports tagged "matrix rigidity"

[4] Matrix rigidity talk/overview by Mahdi Cheraghchi

[5] The Geometry of Matrix Rigidity by Landsberg et al.

[6] Blog note by Lance Fortnow on Midrijanis's paper on matrix rigidity

[7] Empirical results in CS papers

vzn
  • How do you sample your random matrices? – Mahdi Cheraghchi Mar 08 '12 at 19:35
  • Entries are 1 vs 0 with constant probability, e.g. 50%. The technique can estimate Rig_M for matrices with height/width in the 100s, using a PC and ruby code. – vzn Mar 08 '12 at 20:20
  • But in that case the matrix is maximally rigid with probability close to 1, by a counting argument. So estimating rigidity up to a constant factor becomes trivial. – Mahdi Cheraghchi Mar 08 '12 at 22:12
  • I don't understand the fourth bullet point of your algorithm. The definition of rigidity does not deal with removing rows or columns, but with changing entries. I assume you mean changing Delta_row(i, j) entries of one of the two rows i, j that minimize the quantity, so that they become two identical copies of the same row, thereby reducing the rank (same with columns). However, at a later step you may need to change both copies of this row to become equal to another row, so you would need to modify 2*Delta_row(i, k) entries at that point, and so on. So what actually is your upper bound on rigidity? – Sasho Nikolov Mar 09 '12 at 16:06
  • MCH -- the point is that the algorithm can estimate Rig_M for any matrix, generated in any way. The Rig_M value for many types of matrices is poorly understood, and there are not even theoretical estimates in many cases. Examples of matrices that would be interesting to probe are e.g. those generated from NP-complete problems, or those from coding theory, e.g. Hadamard/Sylvester matrices, etc. – vzn Mar 09 '12 at 18:44
  • SN -- excellent point/nice catch, thanks. [Full disclosure] that had not occurred to me so far; I guess the summation step has to be modified to take into account what you point out, by multiplying each step by a factor of "n", where "n" is the number of iterations so far. I need to work out how to express that exactly. – vzn Mar 09 '12 at 18:50
  • MCH -- suppose I generate a matrix using an NP language/problem, and suppose its empirically measured/estimated rigidity is similar, or trends close, to that of random matrices of "moderate" size. That seems like a useful/meaningful result, true? That is an old conjecture of mine: that NP languages "look" random in some sense that has yet to be quantified... – vzn Mar 09 '12 at 19:03
  • SN -- further thought on this: all the rows/columns that have been "merged" can be thought of as a single row/column from then on. There are merged columns and unmerged columns, and one can later change entries of the unmerged columns, instead of the merged rows/columns, to match the merged-so-far rows/columns, true? – vzn Mar 09 '12 at 19:24
  • What if two pairs of merged rows become the two closest rows? – Sasho Nikolov Mar 09 '12 at 20:58
  • The idea behind the algorithm was that it is greedy and not too complicated. Here's another idea: modify the above algorithm so that it marks/flags columns/rows as merged, and only considers merges between either two unmerged rows or a merged/unmerged pair, but never a merged/merged pair. There can be other variations. I wonder if anyone has computed matrix rigidity at all for arbitrary matrices -- I have never seen it actually calculated in a paper, or seen an algorithm for it in general. – vzn Mar 10 '12 at 00:12
  • Of course it makes sense to look at rigidity of specific matrices, like Sylvester. My point is that once you generate a matrix randomly and uniformly, with overwhelming probability you already know its rigidity and there's not much reason to proceed. – Mahdi Cheraghchi Mar 11 '12 at 23:59
  • Actually, you don't know the rigidity, in the sense that the known trends for random matrices are asymptotic and "probably" do not fit the same calculated curve exactly for low values (similar to the small-sample bias problem in statistics). But that comparison can't be done without some algorithm for the computation. Apparently nothing other than brute force is known, which takes n^t time where t is the matrix size. Which reminds me of another question on here about worst-case P-time algorithms that I might now answer with that. – vzn Mar 12 '12 at 02:32

1 Answer


The following additional references on Matrix Rigidity may be helpful:

  1. Complexity Lower Bounds using Linear Algebra, S. V. Lokam, in Foundations and Trends in Theoretical Computer Science, Volume 4, 1-2, 2009. http://dx.doi.org/10.1561/0400000011

  2. Using Elimination Theory to construct Rigid Matrices, A. Kumar, S. V. Lokam, V. Patankar, J. Sarma, ECCC TR09-106, http://eccc.hpi-web.de/report/2009/106/

user13909