
Functional data structures, or immutable data structures, are often implemented by copying old data into new data on every operation. Naively, this looks much less efficient than their imperative counterparts. However, there are techniques that recover efficiency (n.b. immutable structures still cannot be more efficient than imperative ones).

One technique, for example, is to implement a functional sequence as a balanced tree. Then, on updating an entry of the sequence, only the $O(\log n)$ nodes on the path from the root to that entry need to be copied; every other subtree is shared with the old version.
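To make that concrete, here is a minimal sketch of path copying in Haskell (my own illustration, not code from [1]): a sequence stored as a size-annotated binary tree, where `update` rebuilds only the nodes on the root-to-leaf path and shares every untouched subtree with the old version.

    -- A persistent sequence as a size-annotated binary tree.
    data Seq a
      = Leaf a
      | Node Int (Seq a) (Seq a)  -- the Int caches the subtree's element count

    size :: Seq a -> Int
    size (Leaf _)     = 1
    size (Node n _ _) = n

    -- Build a roughly balanced tree from a non-empty list.
    fromList :: [a] -> Seq a
    fromList [x] = Leaf x
    fromList xs  = Node (length xs) (fromList l) (fromList r)
      where (l, r) = splitAt (length xs `div` 2) xs

    -- Replace the element at index i. Only the O(log n) nodes on the
    -- root-to-leaf path are rebuilt; every other subtree is shared.
    update :: Int -> a -> Seq a -> Seq a
    update _ x (Leaf _) = Leaf x
    update i x (Node n l r)
      | i < size l = Node n (update i x l) r             -- r shared, not copied
      | otherwise  = Node n l (update (i - size l) x r)  -- l shared, not copied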

Another technique, which seems to be the main selling point of Chris Okasaki's work, is to use laziness. It is detailed in his book [1], but I find the book hard to follow since I'm familiar with neither ML nor Haskell. Hence I'd like to try my luck here and see if anyone can explain how laziness helps in implementing efficient immutable data structures. An example with a basic structure like a (functional) sequence would suffice.
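For concreteness, here is roughly what I can extract from the book so far: the banker's queue, sketched from memory in Haskell (so treat it as an approximation of the version in [1], quite possibly wrong in details). The expensive `reverse` of the rear list is suspended inside a lazy list; it is forced only after enough cheap operations have been charged for it, and memoization ensures it is forced at most once even when many versions of the queue share it.

    -- Banker's queue: Haskell lists are lazy, which the structure relies on.
    data Queue a = Queue Int [a] Int [a]  -- front length, front, rear length, rear

    empty :: Queue a
    empty = Queue 0 [] 0 []

    -- Restore the invariant |rear| <= |front|. `reverse rear` is not run
    -- here: it sits behind a lazy (++) and is forced only once the front
    -- has been consumed down to it, by which point the preceding cheap
    -- operations have paid for it. Forced thunks are memoized, so the
    -- reverse happens at most once even across shared versions.
    check :: Queue a -> Queue a
    check q@(Queue lenF front lenR rear)
      | lenR <= lenF = q
      | otherwise    = Queue (lenF + lenR) (front ++ reverse rear) 0 []

    snoc :: Queue a -> a -> Queue a
    snoc (Queue lenF front lenR rear) x =
      check (Queue lenF front (lenR + 1) (x : rear))

    uncons :: Queue a -> Maybe (a, Queue a)
    uncons (Queue _ [] _ _) = Nothing
    uncons (Queue lenF (x : f') lenR rear) =
      Just (x, check (Queue (lenF - 1) f' lenR rear))

What I would like to understand is why this kind of suspension gives good amortized bounds even when the structure is used persistently.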

Reference

[1] Chris Okasaki. Purely Functional Data Structures. Cambridge University Press, 1998.

  • Consider $(\lambda xyz.17)\ L\ M\ N$. In a call-by-value scheme, all arguments $L, M, N$ are evaluated even though they are not used -- wasteful. Laziness does not do this. Similarly, consider $(\lambda x.\ x+x+x+x)\ M$: in both lazy and call-by-value evaluation, $M$ is evaluated exactly once. – Martin Berger Oct 29 '22 at 12:29 (a runnable version of both examples appears after these comments)
  • This question would be more suitable for cs.stackexchange.com/. – Martin Berger Oct 29 '22 at 12:29
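The first comment's examples can be run directly in Haskell; below is a self-contained version (my own illustration, using `Debug.Trace.trace`, which prints a message at the moment its argument is actually evaluated):

    import Debug.Trace (trace)

    -- (\xyz.17) L M N: no argument is used, so under lazy evaluation
    -- none is evaluated -- passing `undefined` is fine.
    constant17 :: a -> b -> c -> Int
    constant17 _ _ _ = 17

    -- A value that announces the moment it is actually evaluated.
    expensiveM :: Int
    expensiveM = trace "evaluating M" (6 * 7)

    -- (\x. x+x+x+x) M: under call-by-need the argument is a shared
    -- thunk, forced once and reused for all four occurrences.
    fourTimes :: Int -> Int
    fourTimes x = x + x + x + x

    main :: IO ()
    main = do
      print (constant17 undefined undefined undefined)  -- prints 17, no crash
      print (fourTimes expensiveM)  -- "evaluating M" appears once, then 168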

0 Answers