Definitions: Sorry for the ad hoc terminology -- comments or answers that provide pointers to standard terminology would be much appreciated. For simplicity I'd like to restrict discussion to real-valued random variables.
"mean-parameterizable model" - a statistical model $\mathcal{M}$ (family of probability distributions) such that, given a value $\mu$, there exists at most one distribution $\mathbb{P}_{\mu} \in \mathcal{M}$ with mean $\mathbb{E}_{\mathbb{P}_{\mu}} X = \mu$. (And the mean exists for all distributions, hence there is a one-to-one correspondence between possible means $\mu$ and possible distributions $\mathbb{P}_{\mu}$.)
Examples: Gaussian distributions with a fixed variance, binomial distributions with a fixed number of trials.
"concentration function" - given a fixed probability distribution $\mathbb{P}_{\mu}$ with mean $\mathbb{E}_{\mathbb{P}_{\mu}} X = \mu$, the function $c_{\mu}(y) = \mathbb{P}_{\mu}(|X-\mu| \le y)$, basically if $X \sim \mathbb{P}_{\mu}$ then the CDF of the random variable $|X - \mu|$.
"invariant concentration function" - a mean-parameterizable model $\mathcal{M}$ has an invariant concentration function if, for all distributions $\mathbb{P}_{\mu} \in \mathcal{M}$ (all possible values of mean $\mu$) the concentration function $c_\mu$ is always the same function $c$, i.e. for all $\mathbb{P}_{\mu_1}, \mathbb{P}_{\mu_2} \in \mathcal{M}$ we have that $c_{\mu_1}(y) = c_{\mu_2}(y)$ for all values of $y$.
"translation-invanriant (model) - a statistical model $\mathcal{M}$ such that for any $\mathbb{P} \in \mathcal{M}$, and any constant $r \in \mathbb{R}$, if $X \sim \mathbb{P}$, then the distribution of $X + r$ is also in $\mathcal{M}$. Non-example: binomial models.
Question: Are there statistical models (for real-valued random variables), other than translation-invariant ones, that are mean-parameterizable and have an invariant concentration function?
Context: Let $\mathcal{N}(\cdot, \sigma^2)$ denote the family of all Gaussian distributions with variance $\sigma^2$, for some fixed $\sigma \ge 0$ (letting Dirac deltas count as "Gaussians with zero variance"). For any constant $r \in \mathbb{R}$ and any random variable $X \sim \mathcal{N}(\mu, \sigma^2)$, we have $X + r \sim \mathcal{N}(\mu + r, \sigma^2)$, hence $\mathcal{N}(\cdot, \sigma^2)$ is translation-invariant.
More generally, for any random variable $X$ with a finite mean, the family of distributions of $X + r$ for $r \in \mathbb{R}$ is clearly mean-parameterizable, has an invariant concentration function (the same as that of $X$), and is translation-invariant; $\mathcal{N}(\cdot, \sigma^2)$ is a special case.
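As a concrete non-Gaussian instance of this construction (my own example, with an arbitrary choice of base variable $X \sim \mathrm{Exp}(1)$), the location family consisting of the laws of $X + r$ for $r \in \mathbb{R}$ has a concentration function that does not change with $r$:

```python
from scipy import stats

y = 0.8
for r in (0.0, 2.5, -4.0):
    shifted = stats.expon(loc=r)   # X + r with X ~ Exp(1), so its mean is 1 + r
    mu = 1.0 + r
    # P(|(X + r) - mu| <= y) = P(|X - 1| <= y), independent of r
    print(r, shifted.cdf(mu + y) - shifted.cdf(mu - y))
```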
Motivation: At least for models whose distributions are symmetric about their means, the concentration functions determine the width of confidence intervals based on the sample mean, with the value of the concentration function corresponding to the confidence level.
In applications, one commonly gets the request to determine the required sample size based solely on (i) a fixed confidence level and (ii) the desired width of the confidence interval.
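For reference, the calculation such requests usually have in mind looks something like the sketch below (an assumption on my part: a Gaussian model with a known standard deviation $\sigma$, i.e. the textbook $z$-based formula). Because that family has an invariant concentration function, the answer depends only on the confidence level and the desired half-width, not on $\mu$; the analogous Wald-style calculation for a proportion does not have this property.

```python
import math
from scipy import stats

def gaussian_sample_size(confidence, half_width, sigma):
    # n such that the z-based interval mean +/- z * sigma / sqrt(n) has the
    # requested half-width; independent of the hypothesized mean mu.
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / half_width) ** 2)

def proportion_sample_size(confidence, half_width, p):
    # Wald-style calculation for a proportion: the variance p * (1 - p),
    # and hence the answer, depends on the hypothesized mean p.
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

print(gaussian_sample_size(0.95, half_width=0.5, sigma=2.0))   # 62, whatever mu is
print(proportion_sample_size(0.95, half_width=0.05, p=0.5))    # 385
print(proportion_sample_size(0.95, half_width=0.05, p=0.1))    # 139
```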
These requests seem to implicitly assume that the required sample size will not also depend on the specific (point) null hypothesis about the population mean, i.e. that the model has an invariant concentration function.
This is especially vexing to me because, in practice, this null hypothesis seems to be determined only after the sample has been acquired -- namely, that the population mean equals the sample mean -- yet obviously we don't know the value of the sample mean before we have even determined the sample size.
So basically I am curious how common/typical this invariant-concentration-function property is outside of Gaussian models, or whether it is actually rather unusual. E.g. the property seems inapplicable to tests of proportions based on binomial or hypergeometric models, and also to exponential distributions.
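A quick check of the exponential case (again my own illustration; here the exponential family is parameterized by its mean $\mu$): $c_\mu(y) = \mathbb{P}(|X - \mu| \le y)$ visibly changes with $\mu$, so this family is mean-parameterizable but its concentration function is not invariant.

```python
from scipy import stats

y = 1.0
for mu in (0.5, 1.0, 4.0):
    d = stats.expon(scale=mu)                 # exponential with mean mu
    print(mu, d.cdf(mu + y) - d.cdf(mu - y))  # c_mu(y); depends on mu
```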
Related questions:
Gaussian-to-gaussian transformations
Normal Distribution Existence Non-affine Invariant Transformation?