Not only does it make sense, it is one of the key operations in one of the biggest breakthroughs in network design of recent years: the idea of "attention", as used by, e.g., Google Translate, ChatGPT and other GPT-based applications, Stable Diffusion, and many other recent machine learning systems.
Attention is essentially a database lookup over the values that are currently being examined by a network. For instance, in the transformer architecture, the inputs to the attention layers are queries, keys and values, all of which are embeddings of words in the current input context or of words recently generated in the output. The attention layer calculates a dot product between each query and each key (i.e., a component-wise multiplication followed by summing to a single scalar), scales the results by the square root of the key dimension, and applies a softmax so that each query's weights sum to 1. It then uses those weights to take a weighted sum of the values. This produces an output embedding that is most similar to the values whose keys are most similar to the query, but with a little of all of the values mixed in.
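To make that concrete, here is a minimal sketch of scaled dot-product attention in NumPy. The shapes and variable names are illustrative assumptions, not taken from any particular library:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    Returns one output embedding per query, a weighted mix of the values.
    """
    d_k = Q.shape[-1]
    # Dot product of every query with every key, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into weights that sum to 1.
    weights = softmax(scores, axis=-1)
    # Weighted sum of the values: mostly the values whose keys best match
    # the query, with a little of every other value mixed in.
    return weights @ V

# Toy example: 2 queries over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(attention(Q, K, V).shape)  # (2, 4)
```

In a real transformer the queries, keys and values are produced by learned linear projections of the token embeddings, and many such attention "heads" run in parallel, but the core lookup is exactly this weighted sum.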
This has turned out to be extremely useful, allowing networks to be built that handle much larger contexts than would otherwise be possible, because they can select only the parts of the context that are relevant to the output currently being built. This means that tasks that previously required recurrent networks to accumulate information over a large context can now be addressed by feed-forward networks that are much simpler to train.
For more information, the paper "Attention Is All You Need" (Vaswani et al., 2017) is generally considered the best starting point.