
Let $M$ be a square integer matrix, and let $n$ be a positive integer. I am interested in the complexity of the following decision problem:

Is the top-right entry of $M^n$ positive?

Note that the obvious approach of iterated squaring (or indeed any other explicit calculation) potentially requires us to handle integers of doubly exponential magnitude, i.e., having exponentially many bits. However, the problem is easily seen to lie in Allender et al.'s "PosSLP" class ("On the Complexity of Numerical Analysis", SIAM J. Comput. 38(5)), and therefore in the fourth level of the counting hierarchy.
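To make the blow-up concrete, here is a minimal sketch in plain Python (the matrix, the exponent, and the helper names are illustrative choices, not part of the question): it computes $M^n$ exactly by iterated squaring and reports the bit length of the top-right entry, which grows roughly linearly in $n$ and hence exponentially in the bit length of $n$.

```python
# Minimal sketch, plain Python: exact iterated squaring of an integer matrix.
# The example matrix M and exponent n below are illustrative only.

def mat_mult(A, B):
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def mat_pow(M, n):
    k = len(M)
    result = [[int(i == j) for j in range(k)] for i in range(k)]  # identity matrix
    base = [row[:] for row in M]
    while n > 0:                      # binary (iterated-squaring) exponentiation
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result

M = [[2, 1], [0, 3]]
n = 1 << 16                           # n fits in 17 bits, but ...
P = mat_pow(M, n)
print(P[0][-1] > 0)                   # is the top-right entry of M^n positive?
print(P[0][-1].bit_length())          # ... the entry itself has ~10^5 bits
```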

1) Is it possible to place this matrix powering problem in a lower complexity class?

2) If not, could it conceivably be PosSLP-hard?

3) I am especially interested in the matrix powering problem for low-dimensional matrices, i.e., up to and including $6 \times 6$ matrices. Might the complexity be lower for such matrices?

Joel
  • Shouldn't the title be changed to "Complexity of matrix powering"? Matrix exponentiation (see e.g. http://en.wikipedia.org/wiki/Matrix_exponential) is generally understood as $A = \exp(B)$ for matrices $A, B$. – Martin Schwarz Oct 01 '12 at 07:54
  • I'll edit it. That's a good point, @MartinSchwarz – Suresh Venkat Oct 01 '12 at 08:02
  • If you transform the matrix into $PDP^{-1}$ form (which for a small matrix and a sufficiently high power $n$ can be considered constant), then you can trivially determine the sign of each diagonal entry. Then it's easy to figure out the remaining two matrix multiplications. – Robert Mason Oct 02 '12 at 21:12
  • @Robert Mason: I'm not exactly sure what you're suggesting. If $D$ is the Jordan canonical form of $M$, so that $M^n = P^{-1} D^n P$, then the entries of $D$ will typically be complex algebraic numbers, so what do you mean by their "sign"? I agree you can compute $D$ and $P$ in polynomial time (assuming standard representations of algebraic numbers), but the expression you get for the top-right entry of $M^n = P^{-1} D^n P$ will be an expression involving various algebraic numbers raised to the power $n$, and I don't see how you can determine the sign of this expression efficiently. – Joel Oct 02 '12 at 22:09
  • P.S. We can't necessarily assume that $M$ is diagonalisable, but even if it is, I still don't see how this approach can yield anything more efficient than PosSLP... – Joel Oct 02 '12 at 22:11
  • You are correct. It somehow slipped my mind that this is only efficient for invertible matrices, which is a tiny subset of the square matrices. – Robert Mason Oct 03 '12 at 12:26
  • @Robert Mason: I still don't understand -- how/why is this efficient for invertible matrices? (And incidentally, "most" matrices are invertible, rather than the opposite.) – Joel Oct 03 '12 at 19:35
  • From "Threshold Circuits for Iterated Matrix Product and Powering" by Carlo Mereghetti and Beatrice Palano (where the problem for unary powers is dealt with; see Section 3), it is clear that we need to find $x^N \bmod \chi(x)$, where $\chi(x)$ is the characteristic polynomial of the matrix $A$ whose power $A^N$ we want to compute. This is because if we write $x^N = q(x)\chi(x) + r(x)$ with $\deg(r) < \deg(\chi)$, then by the Cayley-Hamilton theorem $A^N = r(A)$. Note that $r(x)$ will have doubly exponential coefficients if $N$ is an $n$-bit number. – SamiD Oct 23 '13 at 19:59
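For concreteness, here is a minimal sketch, in plain Python, of the Cayley-Hamilton reduction described in the last comment above: compute the characteristic polynomial $\chi$ of $A$ (via the Faddeev-LeVerrier recurrence), compute $r(x) = x^N \bmod \chi(x)$ by binary exponentiation of polynomials, and return $A^N = r(A)$. The helper names and the example matrix are illustrative choices, and, as the comment notes, the coefficients of $r$ still have exponentially many bits in the bit length of $N$, so this reduction does not by itself improve on the PosSLP bound.

```python
# Minimal sketch of the Cayley-Hamilton reduction: A^N = r(A) where
# r(x) = x^N mod chi(x) and chi is the characteristic polynomial of A.

def char_poly(A):
    """Integer coefficients of det(xI - A), lowest degree first, via the
    Faddeev-LeVerrier recurrence (all divisions are exact for integer A)."""
    d = len(A)
    I = [[int(i == j) for j in range(d)] for i in range(d)]
    c = [0] * (d + 1)
    c[d] = 1
    M = [[0] * d for _ in range(d)]
    for k in range(1, d + 1):
        AM = [[sum(A[i][t] * M[t][j] for t in range(d)) for j in range(d)]
              for i in range(d)]
        M = [[AM[i][j] + c[d - k + 1] * I[i][j] for j in range(d)]
             for i in range(d)]
        AMk = [[sum(A[i][t] * M[t][j] for t in range(d)) for j in range(d)]
               for i in range(d)]
        c[d - k] = -sum(AMk[i][i] for i in range(d)) // k
    return c

def poly_mod(p, chi):
    """Remainder of p modulo the monic polynomial chi (coefficient lists,
    lowest degree first)."""
    p = p[:]
    d = len(chi) - 1
    while len(p) > d:
        lead = p.pop()                  # coefficient of the current highest power
        if lead:
            shift = len(p) - d
            for i in range(d):
                p[shift + i] -= lead * chi[i]
    return p

def poly_mul_mod(p, q, chi):
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return poly_mod(prod, chi)

def x_pow_mod(N, chi):
    """x^N mod chi(x) by binary exponentiation."""
    result, base = [1], [0, 1]
    while N > 0:
        if N & 1:
            result = poly_mul_mod(result, base, chi)
        base = poly_mul_mod(base, base, chi)
        N >>= 1
    return result

def matrix_power_via_cayley_hamilton(A, N):
    d = len(A)
    chi = char_poly(A)
    r = x_pow_mod(N, chi)               # r(x) = x^N mod chi(x)
    # Evaluate r(A) by Horner's rule: R <- A*R + coeff*I.
    R = [[0] * d for _ in range(d)]
    for coeff in reversed(r):
        R = [[sum(A[i][t] * R[t][j] for t in range(d)) + (coeff if i == j else 0)
              for j in range(d)] for i in range(d)]
    return R

A = [[0, 1, 0], [0, 0, 1], [1, 1, 0]]   # companion matrix of x^3 - x - 1
N = 10**5
P = matrix_power_via_cayley_hamilton(A, N)
print(P[0][-1] > 0)                     # sign of the top-right entry of A^N
print(P[0][-1].bit_length())            # the entry still has tens of thousands of bits
```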

1 Answer


For matrices of size $k = 2, 3$, the Matrix Powering Positivity Problem is in $\mathsf{P}$ (cf. this paper, to appear in STACS 2015).

SamiD