I'm not sure how you are using an MCMC sampler without a prior specified, since any implementation I've seen of an MCMC sampler requires the "joint distribution", i.e. likelihood $\times$ prior.
Anyhow, maybe I can try to clear some things up. Let's use some (somewhat informal) notation for the pieces of your problem / experiment. You have some data $x_1, ..., x_n$ that are realizations of random variables $X_1, ..., X_n \overset{iid}{\sim} N(\mu, \sigma^2)$. That is, the data are drawn from a normal distribution with unknown mean $\mu$ and unknown variance $\sigma^2$.
In and of itself, saying "this data I have was drawn from some distribution" is an assumption (although I assume you're synthetically creating the data so you actually know the true data generating process).
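For concreteness, here is a minimal sketch of that setup in Python (the "true" parameter values are of course made up for the example; in a real experiment they're exactly what you don't know):

```python
import numpy as np

# Hypothetical "true" parameters, only knowable because we generate
# the data ourselves.
true_mu, true_sigma = 2.0, 1.5

rng = np.random.default_rng(0)
x = rng.normal(loc=true_mu, scale=true_sigma, size=100)  # x_1, ..., x_n
```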
This assumption determines what some call an "observation model" and others a likelihood function. Using this function, we can measure how likely our observed data is to have been generated by our model under particular values of the parameters.
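As a sketch of what that measurement looks like in code (continuing the toy data above, and assuming SciPy for the normal density):

```python
import numpy as np
from scipy.stats import norm

x = np.random.default_rng(0).normal(2.0, 1.5, size=100)  # synthetic data as above

def log_likelihood(mu, sigma, data):
    # Sum of log N(x_i | mu, sigma^2) over the iid observations.
    return norm.logpdf(data, loc=mu, scale=sigma).sum()

# Higher value = the data are more plausible under these parameter values.
print(log_likelihood(2.0, 1.5, x))
print(log_likelihood(0.0, 1.5, x))  # a much worse explanation of the data
```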
Maximum likelihood looks across the possible values (states) of the parameters ($\mu$ and $\sigma^2$ in this example) and finds the ones that best explain our data, i.e. the values that give the highest value when plugged into the likelihood function with our dataset held fixed.
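A rough sketch of that search (for the normal model the maximizers also have a closed form, which we can use as a sanity check):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

x = np.random.default_rng(0).normal(2.0, 1.5, size=100)  # synthetic data as above

def neg_log_likelihood(params, data):
    mu, log_sigma = params  # optimize log(sigma) so sigma stays positive
    return -norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()

# Numerically maximize the log-likelihood (minimize its negative).
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(x,))
mu_mle, sigma_mle = res.x[0], np.exp(res.x[1])

# Closed form for the normal: sample mean and the (biased, divide-by-n)
# sample variance.
assert np.isclose(mu_mle, x.mean(), atol=1e-3)
assert np.isclose(sigma_mle**2, ((x - x.mean()) ** 2).mean(), atol=1e-2)
```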
In Bayesian methods we don't just want the single parameter values that best explain the data; we want a distribution over the parameters, in which the probability of a particular setting of the parameters is weighted by how well it explains the dataset we have.
How do we create a distribution over parameters? We use Bayes' theorem:
$$
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}
$$
where $p(x|\theta)$ is the likelihood function we spoke of before and $p(\theta)$ is a prior over the parameters. Thus, to get a distribution over parameters we need to specify a prior, so that we can compute the posterior over the parameters using Bayes' theorem. Note that the denominator $p(x)$ does not depend on $\theta$, which is exactly why an MCMC sampler only ever needs the numerator, i.e. likelihood $\times$ prior.
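To make the connection to MCMC concrete, here is a minimal random-walk Metropolis sketch for this model. The priors are an assumption I'm making purely for illustration (wide normals, with $\sigma$ restricted to be positive); any proper prior would do:

```python
import numpy as np
from scipy.stats import norm

x = np.random.default_rng(0).normal(2.0, 1.5, size=100)  # synthetic data as above
rng = np.random.default_rng(1)

def log_likelihood(mu, sigma, data):
    return norm.logpdf(data, loc=mu, scale=sigma).sum()

def log_prior(mu, sigma):
    # Illustrative (assumed) priors: wide normals on mu and sigma,
    # with sigma restricted to be positive.
    if sigma <= 0:
        return -np.inf
    return norm.logpdf(mu, 0.0, 10.0) + norm.logpdf(sigma, 0.0, 10.0)

def log_joint(mu, sigma, data):
    # log likelihood + log prior: the unnormalized log posterior, which is
    # all the sampler needs since p(x) cancels in the acceptance ratio.
    return log_likelihood(mu, sigma, data) + log_prior(mu, sigma)

samples = []
mu_cur, sigma_cur = 0.0, 1.0
lp_cur = log_joint(mu_cur, sigma_cur, x)
for _ in range(20_000):
    # Symmetric random-walk proposal around the current state.
    mu_prop = mu_cur + rng.normal(scale=0.2)
    sigma_prop = sigma_cur + rng.normal(scale=0.2)
    lp_prop = log_joint(mu_prop, sigma_prop, x)
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        mu_cur, sigma_cur, lp_cur = mu_prop, sigma_prop, lp_prop
    samples.append((mu_cur, sigma_cur))

posterior = np.array(samples)[5_000:]  # discard burn-in
print(posterior.mean(axis=0))  # posterior means of (mu, sigma)
```

The retained draws are (approximately) samples from $p(\mu, \sigma | x)$, i.e. the distribution over parameters we were after, rather than a single best-fitting point.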
See this other post for a more elaborate discussion of maximum likelihood, maximum a posteriori, and Bayesian inference in the setting of conditional models like regression.