In Bayesian inference, the term $P(D|H)$ is sometimes called the likelihood, as in the example below from Olshausen (2004):
> $$ P(H|D) = \frac{P(D|H)P(H)}{P(D)} $$
>
> The term $P(D|H)$ is called the likelihood function and it assesses the probability of the observed data arising from the hypothesis...
However, in his introductory paper on the concept of likelihood, Etz (2018) states the following:
> A critical difference between probability and likelihood is in the interpretation of what is fixed and what can vary. In the case of a conditional probability, $P(D|H)$, the hypothesis is fixed and the data are free to vary. Likelihood, however, is the opposite. The likelihood of a hypothesis, $L(H)$, is conditioned on the data, as if they are fixed while the hypothesis can vary. The distinction is subtle, so it is worth repeating: For conditional probability, the hypothesis is treated as a given, and the data are free to vary. For likelihood, the data are treated as a given, and the hypothesis varies.
In other words, Etz (2018) uses $P(D|H)$ as an example of a conditional probability that is 'the opposite' of likelihood. Assuming his characterization is correct, why is $P(D|H)$ still routinely called the likelihood in Bayesian inference? Is that usage incorrect?
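To make the distinction concrete as I understand it, here is a minimal sketch in Python (the binomial coin-flip model and all numbers are my own illustration, not taken from Olshausen or Etz):

```python
from scipy.stats import binom

n, k = 10, 7  # observed data D: 7 heads in 10 coin flips

# Probability: fix the hypothesis (theta = 0.5) and let the data vary.
# Summed over every possible outcome, P(D | H) is a proper probability
# distribution over D, so the total is 1.
theta = 0.5
total = sum(binom.pmf(x, n, theta) for x in range(n + 1))
print(total)  # ~1.0

# Likelihood: fix the data (k = 7) and let the hypothesis vary.
# The same expression P(D | H), now read as L(theta) = P(k = 7 | theta),
# is a function of theta and need not sum or integrate to 1 over theta.
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
likelihoods = [binom.pmf(k, n, t) for t in thetas]
print(likelihoods)  # largest at theta = 0.7, the maximum-likelihood value
```

Both computations evaluate the same formula $P(D|H)$; only what is held fixed differs. Since Bayes' theorem is applied after the data are observed (i.e., with $D$ fixed), is that why $P(D|H)$ gets read as a likelihood there?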