If one generates an $n\times n$ Haar random unitary $U$, then clearly $\Pr(U=I)=0$. However, for every $\epsilon>0$, the probability $$\Pr(\|U-I\|_{\rm op}<\epsilon)$$ should be positive. How can this quantity be computed?
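For reference, the probability can at least be estimated numerically by brute force. Below is a minimal Monte Carlo sketch (assuming `scipy.stats.unitary_group` is available; the function name, sample size, and $\epsilon$ value are my own illustrative choices, not part of the question):

```python
import numpy as np
from scipy.stats import unitary_group

def prob_near_identity(n, eps, num_samples=50_000, seed=0):
    """Estimate Pr(||U - I||_op < eps) by sampling Haar-random n x n unitaries."""
    Us = unitary_group.rvs(n, size=num_samples, random_state=seed)  # shape (num_samples, n, n)
    # operator norm = largest singular value of U - I
    dists = np.linalg.norm(Us - np.eye(n), ord=2, axis=(1, 2))
    return np.mean(dists < eps)

print(prob_near_identity(n=2, eps=1.0))  # roughly 0.035 for these parameters
```

For small $\epsilon$ the probability is tiny and such sampling becomes impractical, which is part of the motivation for an analytical expression or bound.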
-
Exactly? Approximately? Upper/lower bounds? Analytically? Numerically? – Norbert Schuch Aug 14 '20 at 11:20
-
https://arxiv.org/abs/1506.07259 could contain some useful information or techniques (they compute said quantity, but for a different distance measure). – Norbert Schuch Aug 14 '20 at 11:27
-
Let's say we want an analytical lower-bound? – Calvin Liu Aug 15 '20 at 15:08
-
Are you saying this randomly, or is this what you really care about? Also, do you care about some specific $n$, any $n$, or maybe the behavior for large $n$? (I think if there is a motivation behind this question, it would be helpful to know it.) – Norbert Schuch Aug 15 '20 at 15:26
-
Note that the operator norm distance is smaller than the Frobenius distance, so the corresponding epsilon-ball is larger. So an exact number or lower bound derived in the paper I quoted above will also give lower bounds for the operator norm distance. – Norbert Schuch Aug 15 '20 at 15:27
-
I specifically care about an analytical lower-bound for n=4. – Calvin Liu Aug 15 '20 at 15:39
-
Ah! Then why don't you say that? -- Have you checked if said paper gives bounds/results for n=4? -- I guess you care about small $\varepsilon$? – Norbert Schuch Aug 15 '20 at 16:41
-
What values of $\epsilon$ are you interested in? On average $U$ and $I$ are going to be almost maximally far apart, i.e. the expected value of the operator norm will be almost maximal, $\mathbb{E}_U[\|U-I\|_\infty]\approx 2$, and Haar random unitaries exponentially concentrate around the average. You can write down lower bounds on the probability you're interested in, they'll just be extremely small for values of $\epsilon$ a little less than 2. – 4xion Aug 25 '20 at 01:38
-
@CalvinLiu I'll leave this as a comment having not carefully worked out the details, but you can think of this probability as the ratio of the volume of an $\epsilon$-ball around $I$ to the volume of the unitary group (with respect to the operator norm); equivalently, you can think of it as one over the size of an $\epsilon$-net for $U(n)$. Very roughly, this is going to be ${\rm Pr}(\|U-I\|_\infty \leq \epsilon) \sim (n/\epsilon^2)^{-n^2}$. – 4xion Aug 25 '20 at 15:36
-
If you need a rigorous lower bound you should be able to compute this more precisely (look up refs related to volumes of balls in the unitary group and $\epsilon$-nets) – 4xion Aug 25 '20 at 15:38
1 Answer
$U-I$ is a normal matrix, so $\|U-I\|_{\rm op}$ equals the largest magnitude among its eigenvalues. Since $U$ and $U-I$ share eigenvectors, every eigenvalue of $U-I$ has the form $\lambda=e^{i\phi}-1$, where $e^{i\phi}$ is an eigenvalue of $U$. Therefore $$|\lambda|^2=(\cos\phi-1)^2+\sin^2\phi=4\sin^2\frac{\phi}{2}\quad\Rightarrow\quad |\lambda|=2\left|\sin\frac{\phi}{2}\right|.$$
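A quick numerical sanity check of this identity (a sketch only; Haar sampling via `scipy.stats.unitary_group`, with an arbitrarily chosen seed):

```python
import numpy as np
from scipy.stats import unitary_group

n = 4
U = unitary_group.rvs(n, random_state=42)        # one Haar-random unitary
phases = np.angle(np.linalg.eigvals(U))          # the phi_j, since eigenvalues of U are e^{i phi_j}
lhs = np.linalg.norm(U - np.eye(n), ord=2)       # operator norm = largest singular value of U - I
rhs = 2 * np.max(np.abs(np.sin(phases / 2)))     # 2 max_j |sin(phi_j / 2)|
print(lhs, rhs)                                  # the two numbers agree to machine precision
```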
Now, the joint distribution of the eigenvalues of a Haar-random unitary matrix is known (the circular unitary ensemble): there are $n$ eigenvalues $e^{i\phi_j}$, with each angle lying in $-\pi\leq \phi_j\leq \pi$ and joint probability density $$p(\phi_1,\cdots,\phi_n)=\frac{1}{n!(2\pi)^n}\prod_{1\leq j< k\leq n}\left|e^{i\phi_j}-e^{i\phi_k}\right|^2=\frac{2^{n(n-1)}}{n!(2\pi)^n}\prod_{1\leq j< k\leq n}\sin^2\frac{\phi_j-\phi_k}{2}.$$ Writing $\mathbf{1}[\cdot]$ for the indicator function, the desired quantity is $$\mathrm{Pr}(\|U-I\|_{\rm op}\leq\epsilon)=\frac{2^{n(n-1)}}{n!(2\pi)^n}\int_{-\pi}^{\pi} d\phi_1\cdots d\phi_n \prod_{1\leq j< k\leq n}\sin^2\frac{\phi_j-\phi_k}{2}\;\mathbf{1}\!\left[2\max\left\{\left|\sin \frac{\phi_1}{2}\right|,\cdots,\left|\sin \frac{\phi_n}{2}\right|\right\}\leq \epsilon\right].$$
For the maximum to be at most $\epsilon/2$, every $\left|\sin\frac{\phi_i}{2}\right|$ must be at most $\epsilon/2$, which (for $0\leq\epsilon\leq 2$) happens exactly when $|\phi_i|\leq 2\arcsin\frac{\epsilon}{2}$. The result is $$\mathrm{Pr}(\|U-I\|_{\rm op}\leq\epsilon)=\frac{2^{n(n-1)}}{n!(2\pi)^n}\int_{-2\arcsin\frac{\epsilon}{2}}^{2\arcsin\frac{\epsilon}{2}} d\phi_1\cdots d\phi_n \prod_{1\leq j< k\leq n}\sin^2\frac{\phi_j-\phi_k}{2} .$$
Right now I can't evaluate that for arbitrary $n$; the integrals can all be done analytically, but they are tedious. For $n=2$ the answer is $$-\frac{\epsilon^2}{\pi^2}+\frac{\epsilon^4}{4\pi^2}+\frac{4}{\pi^2}\arcsin^2\frac{\epsilon}{2}= \frac{\epsilon ^4}{3\pi^2}+O\left(\epsilon ^{6}\right),$$ for $n=3$ the answer is $$\frac{1}{8\pi^3} \left(\left(\epsilon^6-8 \epsilon^4+28 \epsilon^2-48\right) \epsilon^2 \arcsin\left(\frac{\epsilon}{2}\right)+\left(\epsilon^4-6 \epsilon^2+8\right) \sqrt{4-\epsilon^2}\, \epsilon^3+64 \arcsin^3\left(\frac{\epsilon}{2}\right)\right)= \frac{4 \epsilon^9}{135 \pi ^3}+O\left(\epsilon^{11}\right),$$ and for $n=4$ the answer is (I haven't simplified the trig terms here, but that is doable) $$\frac{96 \sin ^3\left(2 \arcsin\left(\frac{\epsilon}{2}\right)\right) \arcsin\left(\frac{\epsilon}{2}\right) \left(-5 \epsilon^2+2 \cos \left(6 \arcsin\left(\frac{\epsilon}{2}\right)\right)+10\right)+4 \epsilon^2 \left(\epsilon^2-4\right) \left(2 \epsilon^8-16 \epsilon^6+53 \epsilon^4-84 \epsilon^2+108\right) \arcsin^2\left(\frac{\epsilon}{2}\right)+1152 \arcsin^4\left(\frac{\epsilon}{2}\right)+\sin ^4\left(2 \arcsin\left(\frac{\epsilon}{2}\right)\right) \left(-172 \cos \left(4 \arcsin\left(\frac{\epsilon}{2}\right)\right)+\cos \left(8 \arcsin\left(\frac{\epsilon}{2}\right)\right)-45\right)}{72 \pi ^4}= \frac{16\,\epsilon^{16}}{23625 \pi ^4}+O\left(\epsilon^{18}\right).$$
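As a consistency check (a sketch only; the value of $\epsilon$, sample count, and seed are arbitrary choices), the $n=2$ closed form above can be compared against a plain Monte Carlo estimate of the restricted angle integral:

```python
import numpy as np

def closed_form_n2(eps):
    # n = 2 expression derived above
    return (4 * np.arcsin(eps / 2) ** 2 - eps**2 + eps**4 / 4) / np.pi**2

def angle_integral_n2(eps, num_samples=1_000_000, seed=0):
    # Monte Carlo estimate of the restricted integral for n = 2:
    # prefactor 2^{n(n-1)} / (n! (2 pi)^n) times box volume times mean integrand
    rng = np.random.default_rng(seed)
    a = 2 * np.arcsin(eps / 2)                          # half-width of the integration box
    phi = rng.uniform(-a, a, size=(num_samples, 2))
    integrand = np.sin((phi[:, 0] - phi[:, 1]) / 2) ** 2
    prefactor = 2 ** (2 * 1) / (2 * (2 * np.pi) ** 2)   # = 1 / (2 pi^2)
    return prefactor * (2 * a) ** 2 * integrand.mean()

eps = 1.0
print(closed_form_n2(eps), angle_integral_n2(eps))      # both come out near 0.035
```

The same number is reproduced by directly sampling Haar-random $2\times 2$ unitaries, as in the sketch under the question.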
-
Why do you think that for a Haar random unitary the distribution of its eigenvalues is a simple product of uniform distributions? The paper https://arxiv.org/abs/1506.07259 has the exact formula (eq. 3) for that distribution. Those angles are not independent. Also check https://en.wikipedia.org/wiki/Circular_ensemble – Danylo Y Jun 03 '21 at 22:56
-
Oh right of course.. also I forgot to take square roots... will see what I can salvage, otherwise will remove – Quantum Mechanic Jun 04 '21 at 02:21