20

In K. W. Regan's article "Connect the Stars", he mentions at the end that it is still an open problem to find a representation of integers such that the addition, multiplication, and comparison operations are all computable in linear time:

Does there exist a representation of integers so that addition, multiplication, and comparison are all doable in linear time? Basically, is there a linear time discretely ordered ring?

(1) How close can we come to linear time multiplication and addition, without compares? Here I assume that the problem sizes may vary, so that we may need a data structure/algorithm that allows for changing integer sizes.

(2) For the complete problem, suppose we seek an optimal scheme for multiplication, addition, and comparison on the integers. How close can the slowest of these three operations (in the worst case) get to linear time? And, on that note, how fast would the other two operations then be?

FORMAL PROBLEM STATEMENT

As Emil Jeřábek mentions, we'd like to rule out trivial cases and concentrate on worst case behavior for this question.

So we ask: for all non-negative integers $x$ and $y$ with $0 \le x < n$ and $0 \le y < n$, can we find a data structure/algorithm that performs addition, multiplication, and comparison between $x$ and $y$ in $O(n \log{(n)})$ time and $O(\log^2{(n)})$ space?
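For concreteness, note that in the standard binary representation two of the three operations are already linear-time in the bit length; multiplication is the only obstacle. A minimal sketch on little-endian bit lists (the helper names are my own, and it assumes inputs are normalized with no high-order zero bits):

```python
def add_bits(a, b):
    """Add two little-endian bit lists in O(n) time (ripple carry)."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s & 1)
        carry = s >> 1
    if carry:
        out.append(carry)
    return out

def less_than(a, b):
    """Compare two normalized little-endian bit lists in O(n) time."""
    if len(a) != len(b):
        return len(a) < len(b)
    # Same length: scan from the most significant bit down.
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            return x < y
    return False  # equal
```

Schoolbook multiplication on this representation is $\Theta(n^2)$, and even the asymptotically fastest known integer multiplication algorithms are superlinear, which is what makes the question nontrivial.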

Matt Groff
  • 2,100
  • 13
  • 19
  • 1
    I'll mention that it is possible to create a scheme that performs these operations in time $\Theta(n)$ on non-negative integers, where $n$ is the bitsize of the largest integer (assuming we know $n$ ahead of time). I wonder if we can do better, and do this in time proportional to the current integer(s) being computed. – Matt Groff Nov 12 '12 at 23:10
  • Instead of getting linear time for all of these operations, is there a representation of integers so that all of these operations have sub-quadratic time? – Tyson Williams Nov 13 '12 at 01:39
  • 5
    @TysonWilliams: Yes! Binary! – Jeffε Nov 13 '12 at 03:36
  • 2
    Instead of getting linear time for all of these operations, is there a representation of integers so that all of these operations have time $O(n\log n)$? – Emil Jeřábek Nov 13 '12 at 12:40
  • mul(a, b) via add(log(a), log(b)) doesn't count as "linear" time for integers represented by their logarithms? – jnml Nov 13 '12 at 12:57
  • 4
    Actually, there is a trivial positive answer: represent integers in binary with $n^2$ bits of padding. Shouldn’t the statement include some condition to the effect that the representation should have length linear in the length of the binary representation? – Emil Jeřábek Nov 13 '12 at 13:03
  • @jnml: I don't believe it's totally trivial how you implement add(a,b) and less_than(a,b). If you believe you can do all three operations faster than $O(n \log n)$ I wouldn't mind having a rough idea of how you do this. – Matt Groff Nov 14 '12 at 04:36
  • Such a representation is unlikely be a simple one since a simple representation is likely to be convertible to and from binary representation very efficiently and that would improve the upper-bound on the binary multiplication. – Kaveh Nov 14 '12 at 04:43
  • 1
    How do you intend to formally state the question so that it “disallows padding”? How do you define that something is a padding? The question as it stands is ill-posed. – Emil Jeřábek Nov 14 '12 at 11:49
  • 5
    @EmilJeřábek: I assume he wants the representation of any integer $n$ to use $f(n)$ bits, for some function $f(n)=\Theta(\log n)$. – Jeffε Nov 14 '12 at 14:12
  • I asked about uses for this system here: http://cs.stackexchange.com/questions/6667/what-are-the-potential-uses-of-a-good-r-n-s-system – Matt Groff Nov 15 '12 at 01:45
this seems similar to the question of whether addition is faster than multiplication. also intuition might suggest that Fourier analysis & convolution might have something to do with this or be a possible "lead", where multiplication is transformed into addition in the frequency domain etc. – vzn Nov 15 '12 at 04:40

1 Answer

14

Probably not the best answer, but perhaps this is a useful starting point. If we wish to represent a non-negative integer, we can store it as a set of residues modulo sequential prime numbers starting from 2. In this form comparison is potentially hard, but multiplication and addition can be done pretty quickly.

The product of the first $n$ primes (the primorial) satisfies $$p_n\# = \exp((1+o(1))\,n\log n).$$ Hence to represent an integer $N$ you require residues modulo the first $n$ primes, where $N < \exp((1+o(1))\,n\log n)$. Since we can represent any $N < p_n\#$ using residues modulo the first $n$ primes, we can take $n \log n \propto \log N$.

Addition and multiplication can be done directly between pairs of residues. There are $n$ such pairs, with the maximum prime being around $n\log n \propto \log N$, so each residue has $O(\log\log N)$ bits. Thus addition should take $O(n\log\log N)$ time, while multiplication via Schönhage–Strassen should take $O(n\log\log N\,\log\log\log N\,\log\log\log\log N)$ time. Since $n \log n \propto \log N$, a rough approximation gives $n \in O(\log N/\log\log N)$. This would put the complexity of addition and multiplication at approximately $O(\log N)$ and $O(\log N\,\log\log\log N\,\log\log\log\log N)$, respectively.
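The componentwise arithmetic described above can be sketched in Python (a toy with a fixed prime set; the helper names are my own, and recovering the ordinary value requires a full Chinese Remainder Theorem reconstruction, which is exactly why comparison is the expensive operation here):

```python
def first_primes(n):
    """Return the first n primes by trial division (fine for a toy)."""
    primes, cand = [], 2
    while len(primes) < n:
        if all(cand % p for p in primes):
            primes.append(cand)
        cand += 1
    return primes

PRIMES = first_primes(10)  # represents any value below 2*3*5*...*29

def to_rns(x):
    """Residue-number-system encoding: one residue per prime."""
    return tuple(x % p for p in PRIMES)

def rns_add(a, b):
    """Addition is componentwise, with no carries between residues."""
    return tuple((x + y) % p for x, y, p in zip(a, b, PRIMES))

def rns_mul(a, b):
    """Multiplication is likewise componentwise on small residues."""
    return tuple((x * y) % p for x, y, p in zip(a, b, PRIMES))

def from_rns(r):
    """CRT reconstruction -- the costly step needed for comparison."""
    M = 1
    for p in PRIMES:
        M *= p
    total = 0
    for x, p in zip(r, PRIMES):
        Mp = M // p
        total += x * Mp * pow(Mp, -1, p)  # pow(..., -1, p): Python 3.8+
    return total % M
```

Note that `rns_add` and `rns_mul` never touch more than one residue at a time, which is the source of the parallelism; the catch is that nothing short of reconstruction (or comparable machinery) tells you which of two encoded values is larger.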

Joe Fitzsimons
  • 13,675
  • 47
  • 91