Theoretically, why do we not need to compute a marginal distribution constant for finding a Bayesian posterior?
Generally speaking, you do need to - it's just that sometimes it's so easy that you might not notice you did it.
With 'textbook' problems you can often write $\pi(\theta\mid x) \propto L_x(\theta)\,\pi(\theta)$, then play about with the right-hand side and recognize it as the kernel of a known density, at which point you've effectively computed what the normalizing constant must have been - the factor required to scale $L_x(\theta)\pi(\theta)$ so it integrates to 1. Since the result is a pdf you know it integrates to 1, and since it's proportional to $L_x(\theta)\pi(\theta)$, you know you have in effect divided by the integral of that.
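As a quick illustration (the standard Beta–Binomial conjugate pair, chosen here just for concreteness): with $x \sim \text{Binomial}(n,\theta)$ and a $\text{Beta}(a,b)$ prior,

$$\pi(\theta\mid x) \propto \theta^{x}(1-\theta)^{n-x}\,\theta^{a-1}(1-\theta)^{b-1} = \theta^{x+a-1}(1-\theta)^{n-x+b-1},$$

which you recognize as the kernel of a $\text{Beta}(x+a,\ n-x+b)$ density, so the normalizing constant must be $1/B(x+a,\ n-x+b)$ - even though you never explicitly integrated anything.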
In cases where that doesn't work, there are often a few choices.
One is numerical integration - you can integrate $L_x(\theta)\pi(\theta)$ numerically to work out the normalizing constant, and then compute posterior expectations and so on.
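A minimal sketch of that idea in Python, using the same toy Beta–Binomial setup as above (my own illustrative choices, not anything from the question): integrate the unnormalized posterior with `scipy.integrate.quad` to get the constant, then use it for a posterior mean.

    import numpy as np
    from scipy import integrate, stats

    # Toy setup (assumed for illustration): x successes out of n trials,
    # with a Beta(2, 2) prior on theta.
    x, n = 7, 10
    prior = stats.beta(2, 2)

    def unnormalized_posterior(theta):
        # L_x(theta) * pi(theta): likelihood times prior, no constant
        return stats.binom.pmf(x, n, theta) * prior.pdf(theta)

    # Normalizing constant: integral of the unnormalized posterior over (0, 1)
    Z, _ = integrate.quad(unnormalized_posterior, 0.0, 1.0)

    def posterior_pdf(theta):
        return unnormalized_posterior(theta) / Z

    # Posterior mean by numerical integration against the normalized density
    post_mean, _ = integrate.quad(lambda t: t * posterior_pdf(t), 0.0, 1.0)
    print(Z, post_mean)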
Another is sampling; maybe you can't find the integral, but you can bound it and use rejection sampling, or sidestep it entirely and use Metropolis-Hastings (where the unknown constant cancels in the acceptance ratio), etc. With a sample from the posterior, you can again find means or other quantities as needed, or get a good approximation to the density or the cdf.
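Here's a minimal Metropolis-Hastings sketch on the same toy setup (again my own illustrative choices), working only with the unnormalized $L_x(\theta)\pi(\theta)$:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x, n = 7, 10                      # toy data, as above
    prior = stats.beta(2, 2)

    def log_unnormalized_posterior(theta):
        # log L_x(theta) + log pi(theta); -inf outside theta's support
        if theta <= 0.0 or theta >= 1.0:
            return -np.inf
        return stats.binom.logpmf(x, n, theta) + prior.logpdf(theta)

    samples = []
    theta = 0.5                                     # starting value
    for _ in range(20000):
        proposal = theta + rng.normal(scale=0.1)    # symmetric random-walk proposal
        log_ratio = (log_unnormalized_posterior(proposal)
                     - log_unnormalized_posterior(theta))
        if np.log(rng.uniform()) < log_ratio:       # the normalizing constant cancels here
            theta = proposal
        samples.append(theta)

    samples = np.array(samples[2000:])              # drop burn-in
    print(samples.mean(), np.quantile(samples, [0.025, 0.975]))

The acceptance step only ever compares ratios of the unnormalized density, which is exactly why the constant never needs to be computed.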
There are other approaches.