Suppose we have a prior $p(\theta)$ and a likelihood function $L(\theta|x)$, and that the likelihood is intractable (difficult or impossible to compute). We therefore replace it with an approximate likelihood $\tilde{L}(\theta|x)$ and conduct inference with the approximate posterior $\tilde{p}(\theta|x) \propto p(\theta)\,\tilde{L}(\theta|x)$ (for example, to calculate the posterior mean of $\theta$). Does this violate the likelihood principle?
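For reference, by the likelihood principle I mean the usual statement: if two observations $x$ and $y$ yield proportional likelihoods,
$$L(\theta \mid x) = c\,L(\theta \mid y) \quad \text{for some constant } c > 0 \text{ and all } \theta,$$
then they should lead to identical inferences about $\theta$.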
Intuitively it seems to, because inference is no longer being conducted under the likelihood $L(\theta|x)$ (though I'm not sure how to make this precise). On the other hand, in practice don't we 'choose' a fairly arbitrary likelihood function anyway, since we never actually know the true data-generating model? If so, what stops us from simply calling the approximate likelihood $\tilde{L}(\theta|x)$ the 'real' or 'original' likelihood?
For example, suppose we know for a fact that the data $x$ were generated from a $\mathrm{Binomial}(n,p)$ distribution, with likelihood $L(n,p|x)$, and I choose some subjective prior $p(n,p)$ (not dependent on the data) on the model parameters. Suppose, however, that I replace $L(n,p|x)$ with the normal approximation $\tilde{L}(n,p|x) = \mathcal{N}(x \mid np,\, np(1-p))$ (discretised if necessary, in case using a continuous distribution for discrete data is a problem) and conduct inference using the posterior $\tilde{p}(n,p|x) \propto p(n,p)\,\tilde{L}(n,p|x)$. Would I be violating the likelihood principle in this case?
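To make the comparison concrete, here is a minimal numerical sketch of the two posteriors I have in mind, on a grid over $(n,p)$. The flat prior, the observed count, and the grid bounds are placeholders I made up for illustration, not part of the question:

```python
import numpy as np
from scipy.stats import binom, norm

x = 7                                  # observed count (made up)
n_grid = np.arange(x, 31)              # grid for n (need n >= x)
p_grid = np.linspace(0.01, 0.99, 99)   # grid for p
N, P = np.meshgrid(n_grid, p_grid, indexing="ij")

prior = np.ones_like(N, dtype=float)   # flat prior p(n, p), purely illustrative
prior /= prior.sum()

# Exact binomial likelihood L(n, p | x)
L_exact = binom.pmf(x, N, P)

# Normal approximation L~(n, p | x) = N(x | np, np(1-p)),
# evaluated at the observed x (a continuous density standing in for a pmf)
L_approx = norm.pdf(x, loc=N * P, scale=np.sqrt(N * P * (1 - P)))

# Posteriors, each normalised over the grid
post_exact = prior * L_exact
post_exact /= post_exact.sum()
post_approx = prior * L_approx
post_approx /= post_approx.sum()

# Compare, e.g., the posterior mean of n under each posterior
print("E[n | x], exact: ", (N * post_exact).sum())
print("E[n | x], approx:", (N * post_approx).sum())
```

The two posterior means will generally differ, which is exactly the discrepancy my question is about.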
Would I still be violating the likelihood principle in the above example if I didn't know that $\mathrm{Binomial}(n,p)$ was the true model?