Functional data structures, or immutable data structures, are typically updated by copying old data into new data: an operation returns a new version instead of mutating in place. Naively, this looks much less efficient than their imperative counterparts. However, there are techniques that recover efficiency (n.b. an immutable structure still cannot be more efficient than its imperative counterpart).
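For example, here is what the naive approach looks like in Python (a toy illustration of my own, not from any book): updating one entry of an immutable tuple copies all $n$ elements.

```python
def set_naive(xs: tuple, i: int, x) -> tuple:
    """Naive persistent update: build an entirely new tuple, copying
    all n elements just to change one. The old version stays valid."""
    return xs[:i] + (x,) + xs[i + 1:]

v1 = (1, 2, 3, 4)
v2 = set_naive(v1, 2, 99)   # v2 == (1, 2, 99, 4); v1 is unchanged
```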
One such technique is to implement a functional sequence as a balanced tree: updating one entry of the sequence then requires copying only the $O(\log n)$ nodes on the path from the root to that entry, while every other subtree is shared with the old version (see the sketch below).
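A minimal sketch of that idea in Python (names and layout are my own; a real implementation would also rebalance under insertions and deletions): the sequence is stored in a size-annotated binary tree, and `update` rebuilds only the root-to-leaf path.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Leaf:
    value: object

@dataclass(frozen=True)
class Branch:
    size: int
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Branch]

def from_list(xs) -> Tree:
    """Build a balanced tree over a non-empty list of elements."""
    if len(xs) == 1:
        return Leaf(xs[0])
    mid = len(xs) // 2
    return Branch(len(xs), from_list(xs[:mid]), from_list(xs[mid:]))

def tree_size(t: Tree) -> int:
    return 1 if isinstance(t, Leaf) else t.size

def update(t: Tree, i: int, x) -> Tree:
    """Replace element i: only the O(log n) nodes on the path to its
    leaf are copied; every other subtree is shared with the old tree."""
    if isinstance(t, Leaf):
        return Leaf(x)
    lsize = tree_size(t.left)
    if i < lsize:
        return Branch(t.size, update(t.left, i, x), t.right)      # share right
    return Branch(t.size, t.left, update(t.right, i - lsize, x))  # share left

s1 = from_list(list("abcdefgh"))
s2 = update(s1, 3, "X")          # s1 is unchanged
assert s2.right is s1.right      # the whole right half is shared
```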
Another technique, which seems to be the main selling point of Chris Okasaki's work, is to use laziness. It is detailed in his book [1], but I find the presentation hard to follow since I'm not familiar with either ML or Haskell. So I'd like to try my luck here and see if anyone can explain how laziness helps in implementing efficient immutable data structures. An example with a basic structure like a (functional) sequence would suffice.
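For concreteness, here is my current (possibly wrong) understanding of the mechanism, as a toy Python sketch of my own: "laziness" here means a memoized suspension, i.e. a delayed computation that remembers its result once forced, so every version of a persistent structure sharing the suspension pays for it at most once.

```python
class Thunk:
    """A suspension: delays a computation and memoizes its result,
    so forcing it repeatedly only pays once."""
    def __init__(self, fn):
        self._fn = fn
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._fn()
            self._fn = None      # drop the closure so it can be collected
            self._done = True
        return self._value

# Two versions of a persistent structure can share one suspension:
# whichever is forced first pays for the work, the other gets it free.
shared = Thunk(lambda: sorted(range(10**6, 0, -1)))
v1 = ("version-1", shared)
v2 = ("version-2", shared)
v1[1].force()   # pays the full cost once
v2[1].force()   # memoized: effectively free
```

What I still don't see is how such suspensions are scheduled (as in, say, the book's banker's queue) so that the amortized bounds survive persistent use; that is the part I'm hoping someone can explain.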
References
- [1] Chris Okasaki, Purely Functional Data Structures, Cambridge University Press, 1998.
- [2] (related) What's new in purely functional data structures since Okasaki?