I'll comment both on the idea of a "true parameter/parametric model" and on that of a "true prior", as the question is somewhat ambiguous about which of these is of interest here.
First, regarding frequentism: it is true that analysing data based on a parametric frequentist model assumes that there is a true model with a true parameter value, which is what frequentist inference makes statements about.
This does not mean, however, that anybody who uses such methods needs to believe that these models, or any particular parameter value, are really true in reality. We use methods that are justified by and derived from artificial formal models, which are always idealisations of reality and should therefore not be called "true" of reality. The fiction of a true parameter allows us to analyse mathematically the characteristics of our inference, and this is pretty much the best justification for such methods that we can get; that is why we use such models, but it doesn't imply "belief". I think that any proper interpretation and discussion of the results of (not only) frequentist inference needs to acknowledge that the models are justified within the "mathematical world", which is different from the real world.
Much of what I write here (above as well as below) has been elaborated in Hennig, C. (2023). Probability Models in Statistical Data Analysis: Uses, Interpretations, Frequentism-as-Model. In: Sriraman, B. (eds) Handbook of the History and Philosophy of Mathematical Practice. Springer, Cham. https://doi.org/10.1007/978-3-030-19071-2_105-1; free on https://arxiv.org/abs/2007.05748
Regarding the Bayesian approach, @Ben has given a good answer. Note, though, that there is more than one interpretation of Bayesian probabilities. De Finetti, for example, is very explicit about not believing in true models and parameters. According to him, the parametric model is only a device to derive meaningful predictive posterior distributions. In de Finetti's sense one can interpret the posterior as being about expected future observations, but not about a true parameter value, as this doesn't exist. A "true prior" in this sense would be a prior that correctly expresses your personal uncertainty (or, in "objective Bayes", the uncertainty based on secured "objective" knowledge).
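De Finetti's position can be stated compactly. The parametric model and the prior enter only as devices inside the posterior predictive distribution; a minimal sketch in standard notation (with sampling model $p(x \mid \theta)$ and posterior $\pi(\theta \mid x_1,\dots,x_n)$, not taken from the answer itself):

$$p(x_{n+1} \mid x_1,\dots,x_n) \;=\; \int p(x_{n+1} \mid \theta)\,\pi(\theta \mid x_1,\dots,x_n)\,d\theta.$$

Only the left-hand side, a statement about an observable future quantity, is taken to be meaningful; $\theta$ is integrated out and need not correspond to anything real.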
It has been argued, though (e.g., in D. Gillies' "Philosophical Theories of Probability"; similar arguments are made in Diaconis & Skyrms's "Ten Great Ideas about Chance"), that having your belief modelled by a prior based on an exchangeability assumption together with a parametric sampling model implies the belief that, if infinitely many observations could be collected, they would in fact behave like the sampling model with a certain true parameter value. In this sense one could correctly state that if your belief is modelled in this standard Bayesian way, you implicitly also believe in a true parameter in the sense defined above. Diaconis & Skyrms (and some other Bayesians) indeed hold that in this way the Bayesian approach actually includes frequentism, but as @Ben correctly notes, there are other differences between these schools. In particular, frequentist inference is about performance characteristics of methods given the true parameter, whereas Bayesian inference is about making probability statements about that parameter and future observations (in my paper cited above I call the former "compatibility logic", because frequentist inference is about whether models are compatible with the data, not about whether they are true, and the latter "inverse probability logic").
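The formal backbone of this argument is de Finetti's representation theorem. For an infinite exchangeable sequence of 0/1 variables it states (in standard notation):

$$P(X_1 = x_1,\dots,X_n = x_n) \;=\; \int_0^1 \theta^{\sum_{i=1}^n x_i}\,(1-\theta)^{\,n-\sum_{i=1}^n x_i}\,d\mu(\theta),$$

so beliefs satisfying exchangeability behave exactly as if the data were i.i.d. Bernoulli$(\theta)$ with $\theta$ drawn from a prior $\mu$; the limiting relative frequency then plays the role of the "true parameter".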
Furthermore, I think that Bayesian epistemic/subjective probability is just as much an idealisation as frequentist probability is. In particular, nobody would normally really believe in exchangeability: it does not only imply that the order of observations is irrelevant, but also that you can never learn from the data, taking the order of observations into account, that the order is in fact relevant, contrary to what was initially assumed (meaning that "believing" in the irrelevance of the order is not enough; you have to be 100% sure of it). So the above argument doesn't really hold, as exchangeability is assumed for convenience and for having a well-defined way of learning from the past about the future, not because anybody would believe that it is actually true. Also, a "true prior", if it even exists, may not agree with the one used for statistical analysis (for example by not assuming exchangeability).
Another aspect is that the probabilities used in Bayesian inference can also be understood in an empirical, frequentist way. In this case the sampling model is interpreted as frequentist (as said above, this does not necessarily mean that we have to believe in it; rather, we analyse the situation as if it were true), and the prior can refer to a frequentist distribution of true parameters over similar studies. This is advocated in several places by Andrew Gelman; also see my paper above. A key problem with this is that in order to define what the "true prior" would be, a precise definition is required of the "reference set" of studies that qualify to be included in the population on which the prior is based. Such a definition is hardly ever given and is probably very hard to specify.
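This frequentist reading of the prior can be illustrated by simulation. The following sketch (a toy Beta-Binomial setup of my own, not from any of the cited sources) draws a "true" parameter for each of many hypothetical studies from the prior and checks whether the equal-tailed 90% posterior credible intervals are calibrated across studies, which holds whenever the assumed prior matches the actual frequency distribution of parameters over the reference set:

```python
# Toy illustration (my own construction): if the prior equals the "true"
# frequency distribution of parameters across studies, Bayesian credible
# intervals are calibrated in the frequentist sense over repeated studies.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
a0, b0 = 2.0, 2.0        # assumed prior: Beta(2, 2) over study-level parameters
n = 50                   # observations per study
n_studies = 20000        # number of hypothetical studies in the reference set

theta = rng.beta(a0, b0, size=n_studies)   # "true" parameter of each study
k = rng.binomial(n, theta)                 # binomial data from each study

# Conjugacy: the posterior in each study is Beta(a0 + k, b0 + n - k);
# take the equal-tailed 90% credible interval from its quantiles.
lo = beta.ppf(0.05, a0 + k, b0 + n - k)
hi = beta.ppf(0.95, a0 + k, b0 + n - k)

coverage = np.mean((lo <= theta) & (theta <= hi))
print(round(coverage, 3))   # close to the nominal 0.90
```

If the prior used in the analysis differed from the distribution that `theta` is actually drawn from, the empirical coverage would in general drift away from the nominal 90%, which is one way to make the notion of a "true prior" operational.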
A final aspect is that in some situations it can be argued that, although the parametric model is an idealisation and not literally "true", the parameter refers to something that really exists (such as the quantity of a certain pollutant in a river, measured with uncertainty). In this way one could justify the existence of a "true parameter" without holding the model to be "true" (with a prior formalising uncertainty about that true parameter), although of course this requires connecting the "true" parameter with the model within which it is mathematically defined, which may be "philosophically hard" without assuming the model to be true as well.