The entropy of a random variable $X$ is defined as $\mathbb{E}[-\log f(X)]$, where $f$ is the pdf (or, in the discrete case, the pmf) of $X$; see https://en.wikipedia.org/wiki/Entropy_(information_theory).
Is there any general relationship between the entropy of $X$ and how accurately the value of $X$ can be predicted by some constant $a$? That is, are there known general relationships between $\mathbb{E}[-\log f(X)]$ and measures such as
$$ \min_a \mathbb{E}[|X-a|] \tag 1$$ $$ \min_a \mathbb{E}[(X-a)^2] \tag 2$$ $$ \min_a \mathbb{E}[I(X \neq a)] \tag 3$$ ?
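(For reference, these minima are attained at standard location summaries: (1) is minimized by any median of $X$; (2) is minimized by the mean, so its minimum is $\operatorname{Var}(X)$; and (3), the probability of a wrong guess, is minimized by a mode, so its minimum is $1 - \max_x P(X = x)$ for discrete $X$.)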
For a $Be(p)$, for example, the two kinds of metrics (entropy on the one hand, and any of (1), (2), or (3) on the other) move together as $p$ varies: if $Be(p)$ has higher entropy than $Be(p')$, then $Be(p)$ is also less accurately predictable (higher minimum expected error) than $Be(p')$ in terms of (1), (2), or (3). Indeed, all four quantities are increasing functions of $\min(p, 1-p)$: the entropy is $-p\log p - (1-p)\log(1-p)$, the minima of (1) and (3) both equal $\min(p, 1-p)$, and the minimum of (2) is $p(1-p)$.
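Here is a quick numerical sanity check of the Bernoulli claim (a sketch in Python with NumPy, using the closed forms above; entropy in nats):

```python
import numpy as np

ps = np.linspace(0.01, 0.99, 99)  # grid of Bernoulli parameters

# Entropy of Be(p) in nats
H = -(ps * np.log(ps) + (1 - ps) * np.log(1 - ps))

# Minimum expected errors, using the closed forms:
mae = np.minimum(ps, 1 - ps)  # (1): attained at the median
mse = ps * (1 - ps)           # (2): attained at the mean (= variance)
err = np.minimum(ps, 1 - ps)  # (3): attained at the mode (prob. of wrong guess)

# If entropy and an error measure move together, sorting the grid by
# entropy should leave the error measure (weakly) increasing as well.
order = np.argsort(H)
for name, vals in [("(1) MAE", mae), ("(2) MSE", mse), ("(3) 0-1", err)]:
    ok = np.all(np.diff(vals[order]) >= -1e-12)
    print(f"{name} moves with entropy: {ok}")
```

On this grid all three checks print `True`, consistent with the observation above.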
Is something like that true more generally? For example, does an increase in entropy always imply a decrease in predictability as measured by (1), (2), or (3)? If it's not true in general, is it true for some class of random variables larger than the $Be(p)$ family?