Without knowing more about $f$ and $g$, there really isn't going to be a clear winner here. Sometimes, the regularized form is going to be easier; sometimes, the constrained form is going to be easier.
Let me dispatch some easier cases. First of all, if both $f$ and $g$ are smooth and have readily computable derivatives, then I'll definitely give the edge to the regularized form, because that will give you an unconstrained smooth problem. If you're using an interior-point method to solve either problem, then there should be no consistent advantage for either form.
Here's an interesting case. Consider $f$ smooth, with a readily computable gradient. Consider two different possibilities for $g$:
$$g_1(x)=\|x\|_\infty \quad\text{and}\quad g_2(x)=\|x\|_1$$
Which will be easier: the regularized form or the constrained form?
I argue the answer is not the same for both.
A first-order method is a logical choice here; it will alternate between standard gradient steps on $f$ and simple nonsmooth minimization steps on $g$. For the regularized form, the $g$ steps will be prox minimizations
$$
x = \mathop{\text{arg}\,\text{min}}_z g(z) + \tfrac{1}{2} \|x^{(+)}-z\|_2^2.
$$
For the constrained form, the $g$ steps will be projections
$$
x = \mathop{\text{arg}\,\text{min}}_{z: g(z)\leq \epsilon} \tfrac{1}{2} \|x^{(+)}-z\|_2^2.
$$
In each case, $x^{(+)}$ is the point obtained from taking the gradient step.
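To make the alternation concrete, here is a minimal proximal-gradient sketch in Python/NumPy. The names (`proximal_gradient`, `grad_f`, `prox_g`), the fixed step size, and the example problem are my own illustrative assumptions, not TFOCS code; note that this standard variant scales the prox by the step size $t$:

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, t=1e-3, iters=1000):
    """Alternate a gradient step on smooth f with a prox/projection step on g.

    prox_g(v, t) should return argmin_z g(z) + (1/(2t))*||v - z||_2^2;
    for the constrained form, pass a Euclidean projection onto {z : g(z) <= eps}.
    """
    x = x0
    for _ in range(iters):
        x_plus = x - t * grad_f(x)   # standard gradient step on f
        x = prox_g(x_plus, t)        # simple nonsmooth minimization step on g
    return x

# Example: f(x) = 0.5*||Ax - b||^2 regularized by lam*||x||_1 (soft-threshold prox).
A, b, lam = np.random.randn(20, 50), np.random.randn(20), 0.1
x_hat = proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0),
    x0=np.zeros(50),
)
```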
We implemented both projections and both prox minimizations in our software TFOCS. It turns out that for $g_1(x)=\|x\|_\infty$, the projection is the easier of the two to implement; and for $g_2(x)=\|x\|_1$, the prox minimization is the easier. In these two easy cases, the problem decouples: each coordinate can be determined separately (coordinate-wise clipping for the $\ell_\infty$ projection, coordinate-wise soft-thresholding for the $\ell_1$ prox; see the sketch below). In the opposite cases, prox minimization for $g_1$ and projection for $g_2$, the variables are coupled.
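For reference, the two easy, decoupled operations are one-liners in NumPy; this is a sketch of the standard closed forms, not the actual TFOCS implementations:

```python
import numpy as np

def project_linf_ball(x, eps):
    # Projection onto {z : ||z||_inf <= eps}: clip each coordinate to [-eps, eps].
    return np.clip(x, -eps, eps)

def prox_l1(x, t):
    # argmin_z t*||z||_1 + 0.5*||x - z||_2^2: soft-threshold each coordinate by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Every coordinate is handled independently, which is exactly the decoupling described above.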
I believe our implementations are $O(n)$ for the easier cases, and $O(n\log n)$ for the harder ones. To be fair, neither cost will be significant if you're solving a problem on a single machine where the gradient steps dominate. But what if you're solving a very large-scale problem on a cluster, using distributed computation to compute the gradient? Then you're going to prefer the decoupling that the $\|x\|_\infty$ projection and the $\|x\|_1$ prox provide.
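To see where the coupling and the $O(n\log n)$ cost come from, here is a sketch of the standard sort-based projection onto the $\ell_1$ ball, and, via Moreau decomposition, the $\ell_\infty$ prox. Again, this is illustrative code under the assumption $\epsilon > 0$, not the TFOCS implementation:

```python
import numpy as np

def project_l1_ball(x, eps):
    """Euclidean projection onto {z : ||z||_1 <= eps}, eps > 0; O(n log n) sort."""
    if np.abs(x).sum() <= eps:
        return x
    u = np.sort(np.abs(x))[::-1]                   # magnitudes, descending
    css = np.cumsum(u)
    j = np.arange(1, len(u) + 1)
    rho = np.nonzero(u > (css - eps) / j)[0][-1]   # last index meeting the condition
    theta = (css[rho] - eps) / (rho + 1)           # one threshold shared by all coords
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def prox_linf(x, t):
    # Moreau decomposition: prox of t*||.||_inf equals x minus the
    # projection of x onto the l1 ball of radius t.
    return x - project_l1_ball(x, t)
```

The threshold $\theta$ depends on a sort and a cumulative sum over all entries, so the coordinates cannot be treated separately; that is the coupling I mean.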
One case I haven't considered is the one where $f$ is nonsmooth and $g$ is smooth. If the regularized form is simple to handle (because $f$ admits a simple prox or projection), then yes, I'd give the edge to that form. But if you can compute the dual of the constrained problem, it is likely to take on a regularized form as well! So there is a real possibility that you could construct an algorithm that solves the constrained form with similar efficiency.
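To sketch that duality argument (assuming strong duality holds, e.g. via Slater's condition): the constrained problem has Lagrangian
$$
\min_x \max_{\lambda \ge 0} \; f(x) + \lambda\bigl(g(x) - \epsilon\bigr),
$$
and for each fixed multiplier $\lambda \ge 0$, the inner minimization over $x$ is exactly the regularized problem $\min_x f(x) + \lambda g(x)$, up to the constant $-\lambda\epsilon$. Solving the dual then reduces to a one-dimensional search over $\lambda$.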
So in summary, it really depends on the specific characteristics of $f$ and $g$, the kinds of algorithms you're willing to deploy, and how much work you're willing to do to extract efficiency. (EDIT: I'm assuming convex problems throughout. But if $f$ is non-convex but smooth and $g$ is non-smooth and convex, then most of what I've said above about first-order methods still applies! Of course, you can no longer guarantee convergence to a global minimum.)