When decoding JSON payloads with a lot of repeated structure (for example, a long list of objects that all share the same keys), does interning the object keys (or values) improve performance? I understand this may depend on many factors, such as the language/runtime and the structure of the data; I'm interested in anyone's experience, research, or results on the topic.
For example, I can imagine that by interning JSON object keys, the same string instance is reused for every repeated key, so fewer unique string objects are created, fewer allocations are made, and there is perhaps better runtime performance and less GC pressure. Is this intuition correct? Do any JSON decoders use this strategy?
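To make the question concrete, here is roughly what I have in mind, sketched in Python using `sys.intern` and the `object_pairs_hook` argument of `json.loads` (the tiny inline payload is just a stand-in for a much larger list of similarly shaped objects):

```python
import json
import sys

# Stand-in payload; imagine thousands of objects with the same keys.
payload = '[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]'

def intern_keys(pairs):
    # Pass each decoded key through sys.intern so identical keys share
    # a single str object instead of allocating a fresh string per object.
    return {sys.intern(k): v for k, v in pairs}

records = json.loads(payload, object_pairs_hook=intern_keys)
```

I'm curious whether this kind of approach measurably helps in practice, or whether decoders already do something equivalent internally.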