In a Bayesian model, we normally have that:
$$ p(\boldsymbol\mu|\boldsymbol X) = \dfrac{p(\boldsymbol X|\boldsymbol \mu)p(\boldsymbol \mu)}{p( \boldsymbol X)} $$
Now suppose that $\boldsymbol \mu \sim N(\boldsymbol \mu_0, A)$ and that $\boldsymbol X | \boldsymbol \mu \sim N(\boldsymbol \mu, B)$, where $A$ and $B$ are known covariance matrices. In this case, by conjugacy, the posterior $p(\boldsymbol\mu|\boldsymbol X)$ is also normal.
Suppose now that I want to find the marginal density of $\boldsymbol X$. Then normally we would integrate $p(\boldsymbol X|\boldsymbol \mu)p(\boldsymbol \mu)$ with respect to $\boldsymbol \mu$.
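(For reference, in this conjugate normal–normal case the integral has a well-known closed form:
$$ \boldsymbol X \sim N(\boldsymbol \mu_0, A + B), $$
i.e. the marginal is again normal, with the prior mean and the sum of the two covariances.)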
However, another method is simply to rearrange Bayes' theorem:
$$ p( \boldsymbol X) = \dfrac{p(\boldsymbol X|\boldsymbol \mu)p(\boldsymbol \mu)}{p(\boldsymbol\mu|\boldsymbol X)} $$
and then drop every factor that does not involve $\boldsymbol X$ — in essence, to read off the kernel of $\boldsymbol X$, which should have an exponential form. After this, we fill in the normalizing constant by identifying the kernel with that of a known density.
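As a quick numerical sanity check, here is a minimal 1-D sketch (with made-up numbers; not the general multivariate case): since the rearranged Bayes identity holds exactly, the ratio $p(x\mid\mu)\,p(\mu)/p(\mu\mid x)$ is constant in $\mu$ and coincides with the $N(\mu_0, A+B)$ density at $x$.

```python
import math

def npdf(z, mean, var):
    """Density of N(mean, var) evaluated at z."""
    return math.exp(-(z - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical 1-D numbers: prior mu ~ N(mu0, A), likelihood x|mu ~ N(mu, B)
mu0, A, B = 1.0, 2.0, 3.0
x = 0.7

# Conjugate posterior: mu | x ~ N(m_post, v_post)
v_post = A * B / (A + B)
m_post = (B * mu0 + A * x) / (A + B)

# The ratio p(x|mu) p(mu) / p(mu|x) should not depend on mu ...
ratios = [
    npdf(x, mu, B) * npdf(mu, mu0, A) / npdf(mu, m_post, v_post)
    for mu in (-2.0, 0.0, 1.5, 4.0)
]

# ... and should equal the marginal density of N(mu0, A + B) at x
marginal = npdf(x, mu0, A + B)
print(ratios, marginal)
```

Up to floating-point error, every entry of `ratios` agrees with `marginal`, which illustrates why every $\boldsymbol\mu$-dependent factor must cancel in the ratio.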
This technique appears to work here, but I am wondering whether the result holds in general.
My question is: what allows us to know that the $p(\boldsymbol X)$ obtained this way is a valid probability density function just by looking at the kernel? If the likelihood, posterior, and prior are all valid probability density functions, each integrating to $1$, is it enough to just "fill in" the constants by identifying the kernel?