As you suggested, like other “dunder” methods in Python, __getitem__ lets you build container objects that behave like the native container types. For example, implementing __len__ in a class allows instances of that class to be passed to the built-in len(). Together with __setitem__ and __delitem__, __getitem__ lets a container support create, read, update, and delete (CRUD) operations:
x[0] = "bork" # calls __setitem__
y = x[0] # calls __getitem__
del x[0] # calls __delitem__
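For instance, here is a minimal sketch of such a container (the SparseList class below is purely illustrative, not part of any library). It stores only the slots that were explicitly assigned and falls back to a default for everything else:

class SparseList:
    """Illustrative list-like container that stores only explicitly set slots."""
    def __init__(self, default=None):
        self._data = {}      # index -> value for the slots that were set
        self._default = default

    def __len__(self):                    # supports len(s)
        return max(self._data, default=-1) + 1

    def __getitem__(self, index):         # supports s[i]
        return self._data.get(index, self._default)

    def __setitem__(self, index, value):  # supports s[i] = v
        self._data[index] = value

    def __delitem__(self, index):         # supports del s[i]
        del self._data[index]

>>> s = SparseList()
>>> s[10] = "bork"
>>> len(s), s[10], s[3]
(11, 'bork', None)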
With that in mind, a specific example that answers your question is overriding __getitem__ to implement a "lazy" dict subclass. The aim is to avoid building the entire dictionary up front, which matters when the source data contains an inordinately large number of key-value pairs, or when producing each entry is expensive, for example when the values are resources fetched over the internet.
Suppose you have two lists, keys and values, such that {k: v for k, v in zip(keys, values)} is the dictionary you need, but it has to be built lazily for speed or efficiency reasons:
class LazyDict(dict):
    def __init__(self, keys, values):
        # Copy the inputs so that lookups do not mutate the caller's lists.
        self.lazy_keys = list(keys)
        self.lazy_values = list(values)
        super().__init__()

    def __getitem__(self, key):
        if key not in self:
            try:
                # Materialise the key-value pair on first access.
                i = self.lazy_keys.index(key)
                self.__setitem__(self.lazy_keys.pop(i), self.lazy_values.pop(i))
            except (ValueError, IndexError):
                raise KeyError("%s not in map" % str(key))
        return super().__getitem__(key)
This is a contrived example that assumes the keys are unique: list.index finds the first occurrence of a key, whereas the dict comprehension above would keep the value of the last occurrence. If you subclass dict to be lazy, make sure to include logic that decides how duplicate keys should be handled.
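As a minimal sketch of one way to do that (the subclass name LazyDictLastWins is just illustrative), here is a variant that mimics the dict-comprehension behaviour by materialising the last occurrence of a duplicated key:

class LazyDictLastWins(LazyDict):
    def __getitem__(self, key):
        if key not in self:
            try:
                # Index of the *last* occurrence of key, to match
                # {k: v for k, v in zip(keys, values)} semantics.
                i = len(self.lazy_keys) - 1 - self.lazy_keys[::-1].index(key)
                self.__setitem__(self.lazy_keys.pop(i), self.lazy_values.pop(i))
            except (ValueError, IndexError):
                raise KeyError("%s not in map" % str(key))
        return dict.__getitem__(self, key)  # plain lookup now that the key is present

>>> LazyDictLastWins([1, 1], ["first", "last"])[1]
'last'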
Usage:
>>> a = [1,2,3,4]
>>> b = [1,2,2,3]
>>> c = LazyDict(a,b)
>>> c[1]
1
>>> c[4]
3
>>> c[2]
2
>>> c[3]
2
>>> d = LazyDict(a,b)
>>> d.items()
dict_items([])
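Note that only keys looked up through the subscript syntax (and hence __getitem__) are materialised; built-in dict methods such as items() and get() do not go through the overridden __getitem__, so d stays empty until a key is accessed explicitly. Continuing the session above (assuming the __init__ shown here copies its inputs, so a and b were not consumed by c):
>>> d[2]
2
>>> d.items()
dict_items([(2, 2)])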