A Monte Carlo estimate of $\mathbb{E} X$ from a sample of size $N$ has variance $\tfrac{\text{Var}[X]}{N}$, and thus the "typical error" of this estimate is of order $\tfrac{\text{std}(X)}{\sqrt{N}}$. If you want to estimate $\mathbb{E} X$ with a high degree of certainty, you should make this variance as small as possible, which means $N$ has to scale with the variance $\text{Var}[X]$. Note that it's the variance of the final estimator that matters, not the individual variances of the random variables involved (they do influence the estimator's variance, but how exactly is up to you).
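To make the scaling concrete, here's a minimal sketch (the Exponential(1) distribution and the sample sizes are arbitrary choices): quadrupling $N$ should roughly halve the typical error.

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Exponential(1), so E[X] = 1 and std(X) = 1.
def typical_error(n_samples, n_repeats=2000):
    # Repeat the Monte Carlo experiment many times and measure
    # the spread of the resulting estimates of E[X].
    estimates = rng.exponential(1.0, size=(n_repeats, n_samples)).mean(axis=1)
    return estimates.std()

for n in [100, 400, 1600]:
    # Observed error should track std(X)/sqrt(N).
    print(f"N={n:5d}  observed error ~ {typical_error(n):.4f}"
          f"   std(X)/sqrt(N) = {1 / np.sqrt(n):.4f}")
```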
So what happens if we consider some $f(X, Y)$ that has the same expectation, $\mathbb{E} f(X, Y) = \mathbb{E} X$, but employs auxiliary variables $Y$? What happens to the variance $\text{Var}[f(X, Y)]$?
First, a major result here is the Rao–Blackwell theorem, which essentially states that $$\text{Var}[\mathbb{E}_{Y|X} f(X, Y)] \le \text{Var}[f(X, Y)]$$ That is, if you can average the auxiliary variable $Y$ (conditioned on $X$) out, doing so can only decrease the variance, or at worst leave it unchanged.
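This inequality is just the law of total variance in disguise: $$\text{Var}[f(X, Y)] = \mathbb{E}_X\big[\text{Var}_{Y|X}[f(X, Y)]\big] + \text{Var}\big[\mathbb{E}_{Y|X} f(X, Y)\big] \ge \text{Var}[\mathbb{E}_{Y|X} f(X, Y)]$$ since the first term on the right is non-negative.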
However, the RB theorem rules out improvement over $X$ only in cases where $X = \mathbb{E}_{Y|X} f(X, Y)$, for example, $f(X, Y) = X + Y$ for $Y|X \sim \mathcal{N}(0, 1)$. Then, indeed, the expectation is the same, but adding more noise won't decrease the variance. Yet Monte Carlo itself is a demonstration of employing more random variables to reduce the variance: take $f(X, Y) = \frac{X+Y}{2}$ for $X$ and $Y$ independent and identically distributed. How did we escape the RB theorem? It's because $X$ is not equal to $\mathbb{E}_{Y|X} f(X, Y)$, which is actually $\frac{X}{2} + \frac{\mathbb{E} X}{2}$. The latter would indeed be an even better estimate of the mean, but we can't compute it in this example, since it requires the very quantity $\mathbb{E} X$ we are trying to estimate.
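A quick numerical check of the above, assuming for concreteness that $X$ and $Y$ are i.i.d. Exponential(1), so that $\mathbb{E} X$ is known and the (normally uncomputable) Rao–Blackwellized estimator can be evaluated for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X, Y i.i.d. Exponential(1): E[X] = 1, Var[X] = 1.
x = rng.exponential(1.0, size=n)
y = rng.exponential(1.0, size=n)
mean_x = 1.0  # known here only because we picked the distribution ourselves

f = (x + y) / 2          # f(X, Y) = (X + Y)/2, still unbiased for E[X]
rb = x / 2 + mean_x / 2  # E_{Y|X} f(X, Y) = X/2 + E[X]/2, uncomputable in practice

print("Var[X]            ~", x.var())   # ~ 1
print("Var[(X + Y)/2]    ~", f.var())   # ~ 1/2: the extra variable helped
print("Var[X/2 + E[X]/2] ~", rb.var())  # ~ 1/4: the RB estimator is better still
```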
Therefore, constructing unbiased estimators with more random variables and smaller variance is possible; the RB theorem just points out some cases where such attempts are bound to fail.
Okay, how do you actually construct such estimators? Besides the Monte Carlo method itself (which spends more computation to reduce the variance), one approach that comes to mind is the antithetic variates method, which uses negatively correlated samples so that the randomness in $X$ and $Y$ "cancels" out (essentially, the method exploits problem structure, but such structure is not always possible or easy to find and utilize).
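Here is a minimal sketch of antithetic variates for estimating $\mathbb{E}[e^U]$ with $U \sim \mathcal{U}(0, 1)$ (the integrand and the distribution are illustrative choices): $U$ and $1 - U$ are identically distributed but negatively correlated, and since $e^u$ is monotone, so are $e^U$ and $e^{1-U}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Estimate E[e^U] for U ~ Uniform(0, 1); the true value is e - 1.
u = rng.uniform(size=n)

plain = np.exp(u)  # ordinary Monte Carlo samples
# Antithetic pair: fluctuations of e^U and e^(1-U) partially cancel.
antithetic = (np.exp(u) + np.exp(1 - u)) / 2

print("true value :", np.e - 1)
print("plain      :", plain.mean(), "  variance:", plain.var())
print("antithetic :", antithetic.mean(), "  variance:", antithetic.var())
```

Note that each antithetic term costs two evaluations of the integrand, so the fair baseline is plain Monte Carlo with $2N$ samples (i.e. half the plain variance); the antithetic variance here lands far below even that.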
In general, the idea of Variance Reduction in Monte Carlo simulations is a very important one, and there are other methods beyond increasing the sample size or introducing extra random variables, such as control variates, importance sampling, and stratified sampling.