
[edit] I am not necessarily focused on topological error correction (the existing answer already gives reasons in that context). Even in the case of topological error correction, I am not sure I understand the answer provided (see the comments below the answer).

Note: I am familiar with concatenated codes; I have only very rough basics in surface code theory (which must be why I struggle to understand the already given answer).

[edit2]: Most of my questions are answered in the comments on the answer by Craig Gidney.


If I want to implement an error-protected algorithm (I assume my gate set is Clifford+T), I typically need to know:

  1. The total number of Clifford gates it requires, including the logical identities
  2. The total number of $T$ gates

However, when you look at papers doing resource estimations for algorithms, you frequently do not have access to all of this information.

For instance, many examples in the references given here do not provide the total number of Clifford gates.

I totally understand why the number of $T$ gates is important. Those gates are typically the ones for which you need to do magic state distillation and are the most costly in terms of physical resources.

But when you do error correction:

  1. Every gate will be noisy: you need to know the total number of logical gates in your algorithm (to know the logical error probability you should target, and hence the number of physical qubits per logical one)
  2. Even if a given $T$ gate requires $\times 100$ more resources than a Clifford one, if you have $\times 1000$ more Clifford gates in your algorithm, then the dominant cost will be due to the Clifford operations.
  3. Probably even worse: you might have a lot of identity operations occurring on your qubits. If you look at figure 7 of this paper, identity operations completely dominate the number of logical gates.
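
To make the arithmetic of point 2 concrete, here is a toy sketch (all the numbers are made up purely for illustration, not taken from any real resource estimate):

```python
# Toy cost comparison (made-up numbers): even with a 100x per-gate penalty
# for T gates, a 1000x larger Clifford count makes Cliffords dominate.
t_count = 10**6            # assumed number of T gates in the algorithm
clifford_count = 10**9     # assumed 1000x more Clifford gates
cost_per_clifford = 1      # arbitrary cost unit
cost_per_t = 100           # a T gate assumed 100x more expensive

clifford_cost = clifford_count * cost_per_clifford   # 10**9 units
t_cost = t_count * cost_per_t                        # 10**8 units

print(clifford_cost > t_cost)  # True: Cliffords dominate the total cost here
```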

This leads me to the following questions:

  1. Why is the total number of Clifford gates frequently ignored in papers doing resource estimation of algorithms?
  2. Are the logical identities included in the Clifford gate count?

For 1: If there is an argument that the number of $T$ gates is typically $\gg$ the number of Clifford gates, I would be interested to hear it.

For 2: I know that a logical identity is mathematically a Clifford operation, but I am not sure whether identities are included in the counting in practice. Since qubits necessarily have to wait at some point, you can be somewhat "forced" to apply identities, and conceptually the number of identities could completely dominate everything else.
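
As a toy illustration of this worry (my own made-up model and numbers, not from any paper): if you picture the circuit as a grid of $n$ qubits $\times$ depth timesteps, then every slot not occupied by a real gate is an enforced identity:

```python
# Toy model (made-up numbers): identities counted as the (qubit, timestep)
# slots left unoccupied by actual gates.
n_qubits = 100
depth = 10**4
gate_slots = 2 * 10**5  # assumed slots filled by real gates
                        # (each two-qubit gate occupies two slots)

identity_count = n_qubits * depth - gate_slots
print(identity_count)  # 800000 idle slots, each still needing QEC
```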

Marco Fellous-Asiani

1 Answer


In a topological code where you're doing gates by code deformation, there's no real distinction between idling and doing Clifford operations. Furthermore, Clifford operations can be sequenced through space instead of through time so they can't really slow down the execution of the algorithm. So you might as well just go slightly lower level and focus on how many logical qubits you are protecting for how many surface code cycles while the non-Cliffords are worked through.

You estimate space usage by knowing how many logical qubits are needed, how much overhead is needed for things like routing, and how many magic state factories you're going to be running in parallel.

You estimate time by knowing how many non-Clifford operations you have and how long it will take for the factories to provide all of them (and also how long the serial chains can be versus the reaction time of the control system).

With those two quantities you know how many logical-qubit-rounds need to be protected. For example, if you're going to run for an hour, doing one surface code cycle per microsecond with 1000 logical qubits, then the algorithm has roughly 4 trillion logical qubit cycles to protect ($3600 \times 10^6 \times 1000 \approx 3.6 \times 10^{12}$). So you pick a code distance that suppresses logical error rates below roughly 1 in 4 trillion.
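
The recipe above can be sketched numerically. The suppression model $p_L \approx A\,(p/p_{th})^{(d+1)/2}$ and the constants below ($A = 0.1$, physical error rate $p = 10^{-3}$, threshold $p_{th} = 10^{-2}$, and $\sim 2d^2$ physical qubits per logical qubit) are common rough assumptions, not exact values:

```python
# Back-of-envelope surface-code footprint, following the recipe above.
# All constants are rough, illustrative assumptions.
runtime_s = 3600           # one hour of computation
cycle_time_s = 1e-6        # one surface code cycle per microsecond
logical_qubits = 1000

cycles = runtime_s / cycle_time_s          # ~3.6e9 surface code cycles
qubit_rounds = cycles * logical_qubits     # ~3.6e12 logical-qubit-rounds

target = 1 / qubit_rounds                  # per-qubit-round error budget

A, p, p_th = 0.1, 1e-3, 1e-2               # assumed suppression-model constants
d = 3
while A * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2                                 # surface-code distances are odd

print(d)           # smallest distance meeting the budget
print(2 * d * d)   # rough physical qubits per logical qubit (data + ancilla)
```

With these assumed constants the loop lands on distance 23, i.e. on the order of a thousand physical qubits per logical qubit; the point is the shape of the calculation, not the exact numbers.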

For example, here is a figure of activity over ~150 microseconds of an addition from https://arxiv.org/abs/1905.09749:

[figure: spacetime diagram of logical activity during the addition]

The red boxes are magic state factories for Toffoli gates. The stuff outside the red boxes is qubits sitting idle, waiting to be operated on. Just outside the red boxes you can see them being routed into the hallways between the red boxes. The stuff between the red boxes is a bunch of routing, a bunch of conditional corrections for the gate teleportation through the magic states, and, hidden away inside each of the blue boxes, contributing almost nothing: the two Clifford gates per ripple-carry step of Cuccaro's adder.

This is why only the non-Clifford gates matter: all the ceremony around getting them done is what so often determines how the computation is laid out. The Cliffords from the original circuit are almost afterthoughts in comparison. Storage time of idle qubits does matter, but you determine that time based on how long the non-Clifford operations take.

To be extremely concrete: if you are counting Cliffords before you start counting routing overhead or storage overhead, you are almost certainly doing things in the wrong order. Routing overhead is way more significant than Clifford gates, and it is completely invisible in the original circuits before being compiled down into the surface code.

Craig Gidney
  • Thank you for your answer. I am not sure I get the point. If I have 1 billion Clifford gates (including identities) and only 1 $T$ gate, I expect to need much more overhead per logical qubit than if I had 1 Clifford and 1 $T$ gate (where no QEC would be needed). Maybe we can put the Cliffords in space and not in time, but it wouldn't change this fact. Could you elaborate on this? – Marco Fellous-Asiani Feb 02 '22 at 20:50
  • @StarBucK if you have a circuit like that, you have bigger problems. Poorly optimized logical circuits have at most $n$ Cliffords per non-Clifford, where $n$ is the number of qubits in the system. Well optimized ones have maybe 10-20 Cliffords per non-Clifford. When we do estimates we are usually in a regime where counting non-Cliffords is sufficient to get a rough idea. The uncertainties on the error rates are so large that you can't do better than rough anyway. – Craig Gidney Feb 02 '22 at 21:42
  • A specific example of a Clifford-heavy circuit is QROM circuits. They have $n$ CNOTs per Toffoli. But then in context you find that every output is touched by a non-Clifford operation, so it gets amortized back down to a constant number of Cliffords per Toffoli. – Craig Gidney Feb 02 '22 at 21:45
  • Thanks a lot for this precision. Is there a theoretical result claiming that if you optimize properly you will never have more than $n$ Cliffords per non-Clifford? If so, I would be very interested in a reference or the name of this theorem. In summary, in your mind, reasoning only on the total number of non-Clifford gates gives a rough estimate of the total number of logical gates (Clifford, including the identity, and non-Clifford) you can expect. Is that, roughly, the "take-home" message you would give? – Marco Fellous-Asiani Feb 02 '22 at 21:51
  • @StarBucK https://arxiv.org/abs/1808.02892 achieves $n$ single-qubit Cliffords per $T$ gate. – Craig Gidney Feb 03 '22 at 04:57
  • Ok, thanks. So what you say only applies to topological codes, then. For other ones (if you concatenate Steane, for instance) I don't see how you could escape the Clifford counting, even if you have "only" $n$ Cliffords per non-Clifford. For $n=1000$ it's pretty significant. – Marco Fellous-Asiani Feb 03 '22 at 09:55
  • Actually, even for topological codes I am not sure I get the point. If I take the example of quantum adders (probably an old implementation): https://arxiv.org/ftp/quant-ph/papers/0206/0206028.pdf, the number of logical identities completely dominates the number of other gates (look at figure $7$: it roughly scales as $n^2$ while non-Cliffords scale as $n$). Reasoning only on the non-Clifford gates could miss an important (dominating?) part of the computation, which is simply storing qubits in memory because circuits cannot be compactified further. And we need to do QEC to do identities. – Marco Fellous-Asiani Feb 04 '22 at 10:06
  • @StarBucK I am not saying you don't count storage costs. I am saying that in an estimate, the storage costs are a function of the arrangement of non-Clifford gates. You can't estimate storage costs without estimating non-Cliffords, and once you've estimated non-Cliffords, getting the storage costs is a nearly trivial additional step. The non-Cliffords are the meat of the estimate, which is why papers focus on them. – Craig Gidney Feb 04 '22 at 10:26
  • Ok, I see. So "in the reasoning" you start by arranging the non-Clifford elements. Once you have done this you can deduce where storage occurs. In the end it could well turn out that the memory cost dominates any other cost by far (such as magic state distillation for non-Cliffords), but this is something you only assess "at the end" of the reasoning. Would you agree? – Marco Fellous-Asiani Feb 04 '22 at 10:33
  • I guess that the number of identities in most sequential circuits will scale as the number of qubits $\times$ the depth. – Marco Fellous-Asiani Feb 04 '22 at 10:34
  • @StarBucK Yes, we just estimate it as depth * qubit count. This also usually accounts for the Clifford operations, since the non-Cliffords are often the limiting factor for depth. – Craig Gidney Feb 04 '22 at 10:36
  • Thank you very much. You answered my questions in the comments, hence I validated your answer. I would have one very last quick question regarding your recent paper, if that's fine with you. You call "measurement depth" "the length of the longest chain of dependent measurements". I am not entirely clear on what that means. I wondered if it is simply equivalent to the full algorithm depth: the number of logical timesteps between the initialization of the first qubit to contain information and the measurement of the last qubit containing info (it seems so from the refs you are citing, but as you used[...] – Marco Fellous-Asiani Feb 04 '22 at 14:09
  • a different wording, I am not entirely sure. – Marco Fellous-Asiani Feb 04 '22 at 14:10
  • @StarBucK It does end up being the overall depth. – Craig Gidney Feb 04 '22 at 19:12
  • Wonderful. Thanks a lot for all your help!! – Marco Fellous-Asiani Feb 04 '22 at 19:51