As I've noted a number of times on this site (see e.g., here, here, here, here, here) the notion that "frequentists" have some special idea of the meaning of probability is illusory. (Indeed, I put this term in quotes because I am not convinced that there is really any frequentist school of thought at all.) All statistical schools that use standard probability measures as the underlying basis for measuring uncertainty accept the validity of the laws of large numbers, and therefore all of them accept that there is a correspondence between marginal probability and asymptotic long-run frequency under appropriate circumstances. If anything, Bayesians have a more detailed view of when and why this correspondence holds, since they do not take it as a primitive, but derive it from probabilistic models based on exchangeability. This means that all Bayesians are also "frequentists", albeit with a more satisfying explanation for why they are "frequentists". The corresponding notion of "fairness" (essentially just probabilistic uniformity) then arises as a simple observation about long-run frequency, just as in the "frequentist" case.
The standard Bayesian account of the correspondence between marginal probability and asymptotic long-run frequency goes like this. Consider a sequence of random variables $X_1,X_2,X_3,...$ (e.g., representing the outcomes of coin tosses) generated by some mechanism that is designed to produce uniform outcomes in the long run. If we are satisfied that the mechanism is stable and that the conditions for generating the outcomes do not change, then we might reasonably believe that the sequence of variables is exchangeable --- i.e., for any finite subset of outcomes, the joint probability of those outcomes does not depend on their order. If we are satisfied that this is the case then it follows from the representation theorem (see e.g., O'Neill 2009) that the variables in the sequence are conditionally IID, conditional on the underlying limiting empirical distribution. In the case of a binary sequence (such as outcomes of coin tosses) the empirical distribution is equivalent to the long-run proportions of the two outcomes, so it can be reduced to a single parameter giving the long-run proportion of heads (here we take tails and heads as $0$ and $1$ respectively):
$$\theta \equiv \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \mathbb{I}(X_i = 1).$$
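To make the meaning of this limit concrete, here is a minimal simulation sketch (not part of the argument above, and using a purely hypothetical value of the long-run proportion). Conditional on that value, the tosses are simulated as IID Bernoulli trials, and the running proportion of heads settles towards it as the number of tosses grows.

```python
import numpy as np

# Minimal sketch with hypothetical values: conditional on a long-run
# proportion theta_true, the tosses are IID Bernoulli(theta_true), and the
# running proportion of heads settles towards theta_true as n grows.
rng = np.random.default_rng(0)
theta_true = 0.6          # hypothetical value of the limit defined above
n = 100_000
tosses = rng.binomial(1, theta_true, size=n)   # 1 = heads, 0 = tails

running_prop = np.cumsum(tosses) / np.arange(1, n + 1)
for n_check in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n_check:>7,}: proportion of heads = {running_prop[n_check - 1]:.4f}")
```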
With this simplification, the representation theorem says that the variables in the sequence are conditionally IID, conditional on $\theta$. Under this account, the conditional probability of a head on any given coin toss, given $\theta$, is equal to $\theta$, and the law of large numbers ensures that the long-run proportion of heads converges to this value. Thus, the coin is "fair" if $\theta = 1/2$. A Bayesian will hold an a priori view on this parameter and then collect data on the outcomes of the coin tosses to update their beliefs. This is a different inference method to the classical "frequentist" methods, but the notion of a fair coin is the same as under the "frequentist" approach. In both cases, we recognise the application of the law of large numbers, so the coin is considered "fair" if the long-run proportion of heads is one-half.
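As a companion sketch (again, not from the argument above), the following shows the Bayesian updating step just described, assuming a conjugate Beta(1, 1) prior for $\theta$ and a hypothetical data set of 53 heads in 100 tosses. The posterior summarises our updated beliefs about $\theta$, including the probability that the coin is close to "fair" in the long-run-frequency sense.

```python
from scipy import stats

# Minimal sketch of the Bayesian update for theta, assuming a conjugate
# Beta(1, 1) (uniform) prior; the prior and the data below are purely
# illustrative assumptions.
a_prior, b_prior = 1.0, 1.0
n_tosses, n_heads = 100, 53          # hypothetical observed tosses

# Conjugacy: the posterior for theta is Beta(a + heads, b + tails).
posterior = stats.beta(a_prior + n_heads, b_prior + (n_tosses - n_heads))

print(f"Posterior mean of theta: {posterior.mean():.3f}")
# Posterior probability that the long-run proportion of heads is within
# 0.02 of one-half, i.e. that the coin is approximately "fair".
print(f"P(0.48 < theta < 0.52 | data) = {posterior.cdf(0.52) - posterior.cdf(0.48):.3f}")
```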