
Let

$L = \{ n : \text{the }n^{th}\text{ binary digit of }\pi\text{ is }1 \}$

(where $n$ is thought of as encoded in binary). Then what can we say about the computational complexity of $L$? It's clear that $L\in\mathsf{EXP}$. And if I'm not mistaken, the amazing "BBP-type" algorithms for computing the $n^{th}$ bit of $\pi$ using quasilinear time and $(\log n)^{O(1)}$ memory, without needing to compute the previous bits, yield $L\in\mathsf{PSPACE}$.
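
For concreteness, here is a minimal Python sketch of BBP-style digit extraction, using the standard Bailey–Borwein–Plouffe formula for $\pi$ in base 16 (a binary digit of $\pi$ is just one of the four bits of the corresponding hex digit). This is only an illustration of the idea: it uses ordinary floating point, so it's reliable only for moderately large $n$, and a careful implementation would carry extra fixed-point precision to get the stated time/space bounds.

```python
def pi_hex_digit(n):
    """Hexadecimal digit of pi at (0-indexed) position n after the point,
    via the Bailey-Borwein-Plouffe formula
        pi = sum_k 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)),
    computed without finding the earlier digits.  Each series is evaluated
    with modular exponentiation, so only polylog(n) bits of state are kept.
    Plain floats limit this sketch to moderate n; a careful implementation
    would use extra fixed-point precision."""
    def frac_series(j):
        s = 0.0
        for k in range(n + 1):                 # here 16^(n-k) is an integer
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(n + 1, n + 20):         # small tail, 16^(n-k) < 1
            s += 16.0 ** (n - k) / (8 * k + j)
        return s % 1.0

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return int(x * 16)

# pi = 3.243F6A88... in hex, so:
assert [pi_hex_digit(i) for i in range(6)] == [0x2, 0x4, 0x3, 0xF, 0x6, 0xA]
```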

Can we do even better, and place $L$ (say) in the counting hierarchy? In the other direction, is there any hardness result whatsoever for $L$ (even an extremely weak one, like $\mathsf{TC}^0$-hardness)?

An interesting related language is

$L' = \{ \langle x,t\rangle : x\text{ occurs as a substring within the first }t\text{ digits of }\pi \}$

(where again, $t$ is written in binary). We have

$L' \in \mathsf{NP}^L$

and hence $L' \in \mathsf{PSPACE}$; I'd be extremely interested if anything better is known.
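
(The $\mathsf{NP}^L$ bound is just guess-and-check: guess a starting position $i \le t - |x|$ and verify $x$ bit by bit against the $L$ oracle. Here is a minimal sketch of the deterministic verifier, where `pi_bit` is a stand-in for whatever oracle or subroutine one has for $L$, and `guess_i` plays the role of the nondeterministic guess; both names are just for illustration.)

```python
def occurs_in_prefix(x, t, pi_bit, guess_i):
    """Deterministic verifier for L': check that the bit string x occurs in
    the first t binary digits of pi, starting at (1-indexed) position guess_i.
    pi_bit(n) should return the n-th binary digit of pi; with guess_i chosen
    nondeterministically, this witnesses L' in NP relative to an oracle for L."""
    if guess_i < 1 or guess_i + len(x) - 1 > t:
        return False
    return all(pi_bit(guess_i + k) == int(x[k]) for k in range(len(x)))
```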

Scott Aaronson
  • Interesting, but why are you interested in computing the bits of $\pi$ instead of computation-related math constants (such as computing the first $n$ bits of Chaitin's $\Omega$)? – Mohammad Al-Turkistany Mar 28 '14 at 17:18
  • (1) Because $\pi$ is the most famous transcendental number, and a lot is known about it. (2) Because I wanted a concrete example. (I would, of course, also be very interested in the analogous questions for $e$, $\sqrt{2}$, etc., to whatever extent the answers differ.) (3) Because, for Chaitin's $\Omega$, I already know the answer: namely, computing the $n^{th}$ binary digit is uncomputable! (And I'm guessing it's possible to give a reduction showing that the subsequence search problem is uncomputable as well for $\Omega$ ... anyone see how?) – Scott Aaronson Mar 28 '14 at 17:24
  • @ScottAaronson, I think we can iterate over all strings $x$ of length $t$ and ask if $\langle x, t \rangle$ is in the language; this gives us all of the first $t$ bits of $\Omega$. – usul Mar 28 '14 at 18:11
  • Is the question for algebraic numbers (e.g. $\sqrt{2}$) really analogous? We can compute the $n$th bit ($n$ given in binary) of an algebraic number via Newton's method in $O(n)$ iterations. On the other hand, I guess a single iteration may take $O(2^n)$ time just to write the output bits. – Huck Bennett Mar 28 '14 at 18:18
  • I have a similar "number-theory-style" language: $L = \{ n \mid \text{the second-lowest bit of the }n\text{-th prime number is }1 \}$ :-) – Marzio De Biasi Mar 28 '14 at 19:01
  • It seems that you are essentially asking about the complexity of the real number $\pi$ in binary representation. There might be something about it in Ker-I Ko's or Weihrauch's book. PS: in general, the infinite binary representation is not a very good one for working with real numbers. – Kaveh Mar 28 '14 at 19:42
  • usul: duhhhh, yes... – Scott Aaronson Mar 28 '14 at 20:31
  • @Kaveh: Well, the fact that it's not a "very good" representation is precisely what might make it more interesting complexity-theoretically! – Scott Aaronson Mar 28 '14 at 20:33
  • Would be interesting to hear ideas on hardness of this language for $e$ ... I just wonder if oracle access to bits of $e$ might be more useful somehow. – usul Mar 28 '14 at 22:55
  • By the way, I checked Weihrauch; at the end of Section 7.2 it states that the $n$-th bit of trigonometric functions and their inverses can be computed in time $t_m(n) \lg n$ using the signed-digit representation (allowing $-1$ in addition to $0$ and $1$ as a digit) on compact subsets of their domain. (Here $t_m$ is the complexity of binary integer multiplication.) – Kaveh Mar 29 '14 at 05:25
  • On Google books. He gives the following article as the reference: R.P. Brent, "Fast multiple-precision evaluation of elementary functions", JACM, 23(2):242-251, 1976. – Kaveh Mar 29 '14 at 10:04
  • The Borweins are famous for doing very large computations of $\pi$ based on the most efficient algorithms possible, but as mathematicians they (as far as I know) never related this exactly to TCS complexity classes. If I understand correctly, there are certain formulas by which digits can be computed in nearly linear time. Overall, I think this is an excellent area for TCS study, and maybe for new TCS/math bridge theorems. – vzn Mar 29 '14 at 15:40
  • http://www.cs.nyu.edu/exact/doc/pi-log.pdf – sdcvvc Mar 29 '14 at 19:44
  • There's a similar MO question, with references, on the complexity of computing $\sin(x)$ to $t$ bits of precision; it's apparently closely connected to multiplication complexity. The lead answer cites the Brent/Zimmermann online reference. Wikipedia also covers most of the known algorithms under "Approximations of π". – vzn Apr 01 '14 at 18:47

1 Answer


OK, James Lee has pointed me to this 2011 paper by Samir Datta and Rameshwar Pratap, which proves that my language $L$ (encoding the digits of $\pi$) is in the fourth level of the counting hierarchy ($\mathsf{PH}^{\mathsf{PP}^{\mathsf{PP}^{\mathsf{PP}}}}$; thanks to SamiD below for pointing out a missing $\mathsf{PP}$ in the paper, which I'd simply repeated in my answer!). The paper also explicitly discusses my question of lower bounds on the complexity of computing the binary digits of irrational numbers, though it only manages to prove a very weak lower bound for computing the binary digits of rational numbers. This is exactly what I was looking for.

Update (April 3): An amusing consequence of the digits of $\pi$ being computable in the counting hierarchy is as follows. Suppose that $\pi$ is a normal number (whose binary expansion converges quickly to "effectively random"), and suppose that $\mathsf{P} = \mathsf{PP}$ (with the simulation involving only a small polynomial overhead). Then it would be feasible to program your computer to find, for example, the first occurrence of the complete works of Shakespeare in the binary expansion of $\pi$. If that sounds absurd to you, then maybe it should be taken as additional evidence that $\mathsf{P} \ne \mathsf{PP}$. :-)
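
(To spell out the "feasible" part: if $\mathsf{P} = \mathsf{PP}$, the counting hierarchy collapses to $\mathsf{P}$, so both $L$ and $L'$ become polynomial-time decidable; and if $\pi$ is normal, the first occurrence of a pattern $x$ shows up within roughly $2^{|x|}$ digits, so the relevant $t$'s binary encoding stays polynomial in $|x|$. Given a subroutine for $L'$, the first occurrence is then found by exponential plus binary search on $t$. A rough sketch, where `member_Lprime` is a hypothetical stand-in for such a subroutine and `max_exp` is an arbitrary safety cap:)

```python
def first_occurrence_end(x, member_Lprime, max_exp=200):
    """Find the smallest t such that the bit string x occurs within the
    first t binary digits of pi (i.e. the position where x first ends).
    member_Lprime(x, t) is a hypothetical decision procedure for L'; only
    O(log t) calls are made, since t is handled in binary throughout."""
    # Exponential search for some t that works ...
    hi = 1
    while not member_Lprime(x, hi):
        hi *= 2
        if hi > 2 ** max_exp:
            return None          # give up: nothing found in the range searched
    # ... then binary search for the least such t (membership is monotone in t).
    lo = hi // 2
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if member_Lprime(x, mid):
            hi = mid
        else:
            lo = mid
    return hi
```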

Scott Aaronson
  • OK, but it says I have to wait 5 hours before doing so! – Scott Aaronson Mar 30 '14 at 10:18
  • BTW, the paper mentioned above essentially reduces the problem to $\mathsf{BitSLP}$ and erroneously quotes the bound as $\mathsf{PH^{PP^{PP}}}$. The best known bound is currently $\mathsf{PH^{PP^{PP^{PP}}}}$, as shown here: http://eccc.hpi-web.de/report/2013/177/ – SamiD Mar 31 '14 at 16:26