
I have a script that takes a while to simulate. I can modify it so that it uses fewer qubits at a time, but that requires more iterations of manipulation. I believe this will cut down on simulation time, but is it worse when running on an actual quantum computer? It will still run in polynomial time either way.

I am thinking I should go with the fewer-qubits, more-gates method, since it would free up qubits for others and cut my simulation time, but I would like to know which approach is technically better, computationally.
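
For a rough sense of the trade-off on a simulator, here is a back-of-the-envelope cost model in Python (a sketch, not a benchmark: it assumes a statevector simulator that stores 2^n complex amplitudes and sweeps all of them once per gate; the qubit and gate counts below are made up for illustration):

    # Rough statevector-simulation cost model.
    # Memory: 2**n amplitudes at 16 bytes each (complex128).
    # Time: roughly proportional to gates * 2**n, one pass over the state per gate.
    def sim_cost(num_qubits, num_gates):
        amplitudes = 2 ** num_qubits
        memory_bytes = 16 * amplitudes
        time_units = num_gates * amplitudes
        return memory_bytes, time_units

    # Illustrative comparison: 20 qubits / 1,000 gates vs 15 qubits / 10,000 gates.
    wide_mem, wide_time = sim_cost(20, 1_000)
    narrow_mem, narrow_time = sim_cost(15, 10_000)
    print(f"wide:   {wide_mem / 2**20:.1f} MiB, {wide_time:.2e} time units")
    print(f"narrow: {narrow_mem / 2**20:.1f} MiB, {narrow_time:.2e} time units")

Even with ten times as many gates, the 15-qubit version comes out roughly 3x faster and uses 32x less memory in this model, because the 2**n factor dominates the gate count.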

asked by nikojpapa; edited by Sanchayan Dutta
  • This is probably very hardware-dependent, and therefore difficult to give a proper answer to. The space/time trade-off for noise might vary significantly. – DaftWullie Nov 07 '18 at 13:51
  • Not only is it hardware-dependent, it also depends on whether the number of gates brings you close to the coherence times of the qubits on the particular platform, and on whether the number of qubits you require is close to the maximum available. Even a small constant factor (e.g. 3/2) can be significant for either one, if you are close to the limits of what the device can do. – Niel de Beaudrap Nov 07 '18 at 14:12
  • There can be multiple people answering, with each person giving the description for their chosen hardware/noise setup. So I don't think this is worth flagging as unclear. – AHusain Nov 07 '18 at 14:13
  • @AHusain: See my comment above; it depends on what the user is trying to realise as well. It's a very broad question. – Niel de Beaudrap Nov 08 '18 at 11:52
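
To make Niel de Beaudrap's coherence-time point above concrete, here is a minimal budget check (all numbers are assumed, illustrative values, not the specs of any real device):

    # Does the deeper circuit still fit inside the coherence window?
    GATE_TIME_NS = 200    # assumed two-qubit gate duration, in nanoseconds
    T2_US = 100           # assumed coherence time, in microseconds

    def fits_coherence(circuit_depth):
        """True if depth * gate time stays under the coherence time."""
        duration_us = circuit_depth * GATE_TIME_NS / 1000
        return duration_us < T2_US

    print(fits_coherence(300))   # 60 us  -> True
    print(fits_coherence(600))   # 120 us -> False

Under these assumptions, trading qubits for a deeper circuit is fine until the added depth pushes the runtime past the coherence time, at which point the narrower-but-deeper circuit becomes the worse option on hardware.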

0 Answers