4

What do authors mean when they say that a certain value should be “super-logarithmic”?

I've found the Wikipedia definition of a “super-logarithm”, but I'm having trouble understanding how a given value can be super-logarithmic (in some security parameter).

For an example of this terminology in use, see http://research.microsoft.com/en-us/um/people/yael/publications/2010-Symmetric_Encryption.pdf; it appears right in the abstract.

e-sushi
giddy

1 Answer

9

The term super-logarithmic in the paper you cite has nothing to do with the notion of a super-logarithm in Wikipedia. Rather, the intention is simply a function that is asymptotically larger than the $\log$ function. Formally, $f$ is super-logarithmic if $f(n)=\omega(\log n)$. The formal definition of "little-omega" appears in the Wikipedia entry on big-O notation.
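
To make the little-omega condition concrete (a standard illustration, not taken from the paper): $f(n)=\omega(\log n)$ means that $f$ eventually dominates every constant multiple of $\log n$, i.e.

$$\lim_{n\to\infty}\frac{f(n)}{\log n}=\infty.$$

For example, $f(n)=\log^2 n$ and $f(n)=\sqrt{n}$ are super-logarithmic, since $\tfrac{\log^2 n}{\log n}=\log n\to\infty$ and $\tfrac{\sqrt{n}}{\log n}\to\infty$, whereas $f(n)=3\log n$ is not, since the ratio stays bounded at $3$.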

Yehuda Lindell
  • 27,820
  • 1
  • 66
  • 83
  • That would mean the simple $f(x)=x$ function is super-logarithmic, right? – Medinoc Jul 21 '16 at 10:03
  • @Yehuda how would one translate the notion of super-logarithmic into concrete terms? Say you wanted concrete 128-bit security: what would be a plausible value for something that needed to be super-logarithmic? – giddy Jul 21 '16 at 17:42
  • @Medinoc Certainly: a linear function is super-logarithmic. Usually, we refer to this regarding the security parameter. – Yehuda Lindell Jul 21 '16 at 17:58
  • 1
    @giddy Typically, when you set some value $v$ to be super-logarithmic, what you are looking for is either that $2^{-v}$ be negligible or that $2^v$ be out of reach of computation. Now, when going concrete, it really depends on what exactly you mean. For example, if you are repeating a zero-knowledge proof with soundness error $1/2$ until you get negligible error, then you could set $v=40$ or $v=80$ depending on what probability of error you can live with. When you need a time bound that is beyond the running time of the adversary, then it is accepted to take $v=128$ today (a worked calculation of the repetition case appears after these comments). – Yehuda Lindell Jul 21 '16 at 18:02
  • @YehudaLindell Just as a follow-up based on what you said -- why would the bounds be lower for negligible soundness for ZKPs? If the ZKP is part of a larger protocol (say a signing algorithm), and you want 128-bit security, wouldn't you need to set $v=128$ for the (NI)ZKP soundness as well (e.g. assuming that breaking the ZKP would allow forging)? Or is it acceptable to have lower concrete bounds for ZKP soundness? – giddy Jul 21 '16 at 18:11
  • 1
    The answer is that there is a difference between statistical error and computational hardness. – Yehuda Lindell Jul 21 '16 at 18:33
  • @YehudaLindell But for non-interactive ZKPs, are they not the same? With enough computational power, you can brute-force a proof that contains the statistical anomaly. (edit: the website recommends that I start a chat instead of continuing here, but sadly I do not have enough reputation to do that). – giddy Jul 21 '16 at 18:44
  • I was referring to interactive ZKP. For non-interactive, you are indeed correct. – Yehuda Lindell Jul 21 '16 at 18:56
  • @giddy: Yes. The point is interaction vs. local computation: we're more comfortable with smaller bounds on how much the adversary can interact with honest parties than on how much computation the adversary can do on its own. (You might be thinking of applying Fiat-Shamir to a public-coin ZKP, but I'm not aware of anything for which the result seems at all likely to remain a proof rather than becoming just an argument.) – Jul 21 '16 at 22:29
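
To make the soundness-repetition arithmetic in the comments above concrete (an illustrative calculation added here, not part of the original thread): if a single run of the proof lets a cheating prover succeed with probability at most $1/2$, then $v$ independent repetitions bring this down to

$$\Pr[\text{all } v \text{ repetitions accept a false statement}] \le 2^{-v},$$

so choosing $v = v(n) = \omega(\log n)$ in the security parameter $n$ makes $2^{-v(n)}$ smaller than $1/n^c$ for every constant $c$ and all sufficiently large $n$, i.e. negligible. Concretely, $v=40$ gives error about $2^{-40} \approx 10^{-12}$ and $v=80$ gives about $2^{-80} \approx 10^{-24}$, matching the statistical-error targets mentioned above, while $v=128$ corresponds to the computational bound $2^{128}$.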