
In "Discovering faster matrix multiplication algorithms with reinforcement learning" (Nature, 2022; lightweight intro), the authors used reinforcement learning (an artificial intelligence technique) to devise a new, slightly faster matrix multiplication algorithm.

Could a similar technique work towards a better multiple-precision modular multiplication algorithm, as used at the core of RSA and prime-field ECC, for practical parameters (say, a 256- to 4096-bit modulus)?


The question is focused on computing $a\times b\bmod n$ (or, as a special case, $a\times a\bmod n$) for arbitrary $a,b\in[0,n)$ with $n$ of $\ell$ bits (possibly restricted to odd $n$), using standard CPU instructions operating on $w$-bit words. I'm looking for a practical competitor to Karatsuba, Toom-Cook, or perhaps Schönhage-Strassen multiplication (but AFAIK the latter is not competitive until $\ell/w$ is much larger than used in RSA).
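For concreteness, a common baseline for this operation in RSA/ECC code is word-level Montgomery multiplication. Below is a minimal sketch (a point of reference, not anything the question proposes) of the CIOS variant in Python, with an illustrative word size W = 64 and hypothetical helper names; it computes $a\cdot b\cdot R^{-1}\bmod n$ for odd $n$ using only word-by-word products and additions, i.e. the inner loop any improved algorithm would have to beat.

    # Minimal word-level Montgomery multiplication sketch (CIOS method).
    # Illustrative only: W, the helper names, and the limb layout are choices
    # made for this sketch, not part of the question.

    W = 64                       # limb size in bits
    MASK = (1 << W) - 1

    def to_limbs(x, k):
        """Split x into k little-endian W-bit limbs."""
        return [(x >> (W * i)) & MASK for i in range(k)]

    def mont_mul(a, b, n):
        """Return a*b*R^(-1) mod n with R = 2^(W*k), n odd (CIOS Montgomery)."""
        k = (n.bit_length() + W - 1) // W
        n0_inv = (-pow(n, -1, 1 << W)) & MASK   # -n^(-1) mod 2^W (Python 3.8+)
        A, B, N = to_limbs(a, k), to_limbs(b, k), to_limbs(n, k)
        t = [0] * (k + 2)
        for i in range(k):
            # multiply-accumulate one limb of a into the running sum t
            carry = 0
            for j in range(k):
                s = t[j] + A[i] * B[j] + carry
                t[j], carry = s & MASK, s >> W
            s = t[k] + carry
            t[k], t[k + 1] = s & MASK, s >> W
            # Montgomery reduction step: add m*n so the low limb of t becomes
            # zero, then drop it (a right shift by one limb)
            m = (t[0] * n0_inv) & MASK
            carry = (t[0] + m * N[0]) >> W
            for j in range(1, k):
                s = t[j] + m * N[j] + carry
                t[j - 1], carry = s & MASK, s >> W
            s = t[k] + carry
            t[k - 1], t[k] = s & MASK, t[k + 1] + (s >> W)
        r = sum(v << (W * i) for i, v in enumerate(t[:k + 1]))
        return r - n if r >= n else r

Under these assumptions a quick self-check is that mont_mul(a, b, n) equals (a * b * pow(2, -W * k, n)) % n, with k = (n.bit_length() + W - 1) // W.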

Optimizing modular exponentiation or ECC arithmetic might also be possible, but the question is not about that.

fgrieu
  • I think the problem is that one needs good insight to find better equations, as Strassen did. There are similar cases with ECC operations where we might expect similar achievements. – kelalaka Oct 06 '22 at 08:49
  • Is there as nice a theory of modular multiplication as there is for matrix multiplication? In the matrix setting, the problem reduces to finding tensor rank decompositions, IIRC. Has something similar been found for modular multiplication (or polynomial multiplication more generally)? – Mark Schultz-Wu Oct 06 '22 at 15:37
  • Hmm, I don't know how we can square that with side channels and such, at least for private/secret-key operations. Surprising results nonetheless, and an interesting question. – Maarten Bodewes Oct 06 '22 at 16:14
  • Pretty much all fast multiplication algorithms can be understood as evaluation-multiplication-interpolation, treating the integers to multiply as polynomials. For example, Karatsuba evaluates at $0, 1, \infty$, and another variant of Karatsuba at $0, -1, \infty$ (see the sketch after these comments). It can get pretty complicated to find the optimal set of evaluation points, particularly with Toom-Cook; see for example this. – Samuel Neves Oct 07 '22 at 13:55
  • @Samuel Neves: if reinforcement learning could uncover better parameters for a Toom-Cook variant, that would be what I'm dreaming about in this Q. – fgrieu Oct 07 '22 at 14:12
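To make the evaluation-multiplication-interpolation view in Samuel Neves' comment concrete, here is a minimal one-level Karatsuba sketch in Python (the function name and split point are illustrative only): each operand is split into two halves and treated as a degree-1 polynomial, evaluated at $0$, $1$ and $\infty$, multiplied pointwise, and interpolated back, so three half-size products replace four.

    # One Karatsuba level viewed as evaluate / multiply / interpolate.
    # Illustrative sketch; the name and split point are not from the question.

    def karatsuba_step(a, b, half_bits):
        """Multiply a*b using one Karatsuba split at X = 2^half_bits."""
        mask = (1 << half_bits) - 1
        a0, a1 = a & mask, a >> half_bits     # a = a1*X + a0
        b0, b1 = b & mask, b >> half_bits     # b = b1*X + b0
        p0 = a0 * b0                          # evaluation at 0
        p1 = (a0 + a1) * (b0 + b1)            # evaluation at 1
        p_inf = a1 * b1                       # evaluation at infinity
        # interpolation: the middle coefficient is p1 - p0 - p_inf
        return p0 + ((p1 - p0 - p_inf) << half_bits) + (p_inf << (2 * half_bits))

The $0, -1, \infty$ variant mentioned in the comment replaces the middle product by $(a_0-a_1)(b_0-b_1)$ and flips its sign during interpolation.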

0 Answers