Whether a given formulation is faster or more stable than another depends on the software you use to solve it.
What is the intuition behind the claim that an SOCP formulation can be solved more efficiently?
Here are a few reasons. For more details, you can have a look at the Mosek modeling cookbook or the seminal book by Ben-Tal & Nemirovski, Lectures on Modern Convex Optimization.
Cones do not require derivatives.
If you write a constraint $\sqrt{x^{2} + y^{2}} \leq z$ and give it to, say, Ipopt (which uses derivatives), it may fail when $x, y, z$ are zero, because the square root is non-differentiable at zero. Conic solvers, on the other hand, do not work with derivatives and will not fail there.
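As a minimal cvxpy sketch (the variables and objective are made up for illustration): the norm constraint is handed to a conic solver as a second-order cone, so the non-differentiability at the origin causes no trouble.

```python
import cvxpy as cp

x, y, z = cp.Variable(), cp.Variable(), cp.Variable()

# ||(x, y)||_2 <= z is a second-order cone constraint; cvxpy hands it to a
# conic solver, which needs no derivatives, even though the optimum sits at
# the non-differentiable point x = y = z = 0.
prob = cp.Problem(cp.Minimize(z), [cp.norm(cp.hstack([x, y])) <= z])
prob.solve()
print(prob.value)  # 0.0, up to solver tolerance
```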
Second-Order Cones are convex.
Consider the constraint $x^{2} + y^{2} \leq z^{2},\; z \geq 0$. The quadratic function $x^{2} + y^{2} - z^{2}$ is not convex, and some solvers may fail to detect that the feasible set nevertheless is (in general, deciding whether a given function is convex over its domain is NP-hard). If you write that same constraint as an SOC, you know intrinsically that it is convex.
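To see the difference in practice, here is a small cvxpy sketch (illustrative only): the quadratic form of the constraint is rejected by the DCP convexity rules, while the SOC form is accepted.

```python
import cvxpy as cp

x, y, z = cp.Variable(), cp.Variable(), cp.Variable()

# Written as a quadratic, the constraint violates the DCP rules
# (and without z >= 0 it is genuinely nonconvex):
quad = cp.Problem(cp.Minimize(z), [x**2 + y**2 - z**2 <= 0])
print(quad.is_dcp())  # False

# The same feasible region written as an SOC is convex by construction:
soc = cp.Problem(cp.Minimize(z), [cp.SOC(z, cp.hstack([x, y]))])
print(soc.is_dcp())  # True
```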
In fact, if your problem is convex, you can almost always write it as a conic optimization problem. You can look at the Mosek modeling cookbook and its conic modeling cheatsheet for such examples.
Conic duality.
The dual of a (convex) conic optimization problem is also a convex conic optimization problem and, in most cases of interest, we know how to write it explicitly. That is not true for general convex optimization problems: the Lagrangian dual is always convex, but evaluating its objective typically requires solving an inner optimization problem, and a closed form is rarely available.
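For reference, the standard primal-dual pair in conic form reads

$$\min_{x} \; c^{T}x \;\; \text{s.t.} \;\; Ax = b,\; x \in K
\qquad\qquad
\max_{y} \; b^{T}y \;\; \text{s.t.} \;\; c - A^{T}y \in K^{*},$$

where $K^{*} = \{ s : s^{T}x \geq 0 \text{ for all } x \in K \}$ is the dual cone. Second-order cones are self-dual ($K^{*} = K$), which is one reason primal-dual methods work so well on them.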
Thus, efficient algorithms exist for solving conic optimization problems. They include, among others, interior-point methods and the Alternating Direction Method of Multipliers (ADMM). The former are implemented in, e.g., Mosek, SeDuMi, and ECOS; the latter in OSQP and SCS.
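As a quick illustration (the problem data is random, and I assume the ECOS and SCS solvers are installed), cvxpy lets you send the same conic problem to an interior-point solver and to an ADMM-based one:

```python
import cvxpy as cp
import numpy as np

# A small least-norm problem in conic form, with made-up data.
np.random.seed(0)
A, b = np.random.randn(5, 3), np.random.randn(5)
x = cp.Variable(3)
prob = cp.Problem(cp.Minimize(cp.norm(A @ x - b)))

prob.solve(solver=cp.ECOS)  # interior-point method
print("ECOS:", prob.value)
prob.solve(solver=cp.SCS)   # ADMM-based first-order method
print("SCS :", prob.value)  # same optimum, up to solver tolerances
```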
Why does SOCP give relatively stable solutions (i.e., if I change parameters, the maximum of the objective function does not vary significantly but remains rather stable)?
This behavior depends on which algorithm you use to solve your problem.
Interior-point methods are likely to give you a similar solution if you do not change the parameters much. Other algorithms may behave differently, especially if your problem has multiple optimal solutions.
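Here is a sketch of that behavior (the data and perturbation size are made up): minimizing $c^{T}x$ over the unit ball has the unique solution $x^{*} = -c/\|c\|$, which moves smoothly with $c$, and an interior-point solver tracks it.

```python
import cvxpy as cp
import numpy as np

c = cp.Parameter(3)
x = cp.Variable(3)
# An SOC-constrained problem whose unique optimum is x* = -c / ||c||.
prob = cp.Problem(cp.Minimize(c @ x), [cp.norm(x) <= 1])

c.value = np.array([1.0, 2.0, 3.0])
prob.solve(solver=cp.ECOS)
x_before = x.value.copy()

c.value = np.array([1.0, 2.0, 3.0]) + 1e-4  # tiny perturbation
prob.solve(solver=cp.ECOS)
print(np.linalg.norm(x.value - x_before))   # small: the solution barely moves
```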
Why doesn't the solver transform the problem automatically into an SOCP, if SOCP is so much better?
Several people advocate for doing so.
- MOSEK will always reformulate everything using conic formulations.
- Disciplined Convex Programming tools such as CVX, cvxpy, etc. let you formulate a problem using convex optimization paradigms and will reformulate it as a conic optimization problem under the hood (see the sketch below).
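For example (the problem is illustrative), you can inspect the cone program cvxpy actually builds before calling a solver:

```python
import cvxpy as cp

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x) + cp.norm1(x)),
                  [cp.sum(x) == 1])

# get_problem_data returns the canonicalized conic data sent to the solver;
# the squared norm and the 1-norm have been rewritten with cone constraints.
data, chain, inverse = prob.get_problem_data(cp.ECOS)
print(data["dims"])  # dimensions of the cones in the reformulated problem
```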
However, it only makes sense to use a conic formulation if you have a conic solver at hand. For instance, Ipopt does not support conic constraints.
Is an SOCP formulation always better (i.e., are there instances where one is better off not transforming the constraints into an SOCP)?
My general experience is that conic formulations tend to be faster and more numerically robust, so I would recommend using them when possible. Sometimes doing so also helps you realize that a problem is actually convex (which is a good thing).
For completeness, here are a couple of counter-examples to this rule:
If your problem has linear constraints and a convex quadratic objective, you can solve it using the simplex algorithm. Doing so can be advantageous if, e.g., you need to warm-start. This is particularly true when solving Mixed-Integer Quadratic Programs, or when solving multiple instances that do not vary too much.
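Here is a hypothetical gurobipy sketch of that workflow (the model and coefficients are made up, and I am assuming a solver that, like Gurobi, can apply the simplex method to continuous QPs): after a small objective change, re-optimizing can start from the previous basis instead of from scratch.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model()
x = m.addVar(lb=0.0, name="x")
y = m.addVar(lb=0.0, name="y")
m.addConstr(x + y >= 1.0)
m.Params.Method = 0  # primal simplex
m.setObjective(x * x + y * y + x, GRB.MINIMIZE)
m.optimize()

# A small change to the objective keeps the current basis primal feasible,
# so the simplex method can warm-start from it.
m.setObjective(x * x + y * y + 1.1 * x, GRB.MINIMIZE)
m.optimize()
```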
Formulating a quadratic constraint (or objective) as an SOC requires computing a matrix factorization. Sometimes this results in a much denser problem, which is then harder to solve.
The converse can also happen: if you have a quadratic objective $x^{T}Qx$ where $Q = M^{T}M$ for some low-rank matrix $M$, then an SOC formulation may be much faster (though you could also introduce $y = Mx$ and minimize $y^{T}y$).
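A sketch of both points (dimensions and data are made up): for a dense $Q$ you would factor it first, e.g. with a Cholesky factorization, while for a low-rank $Q = M^{T}M$ the auxiliary variable $y = Mx$ keeps the cone small.

```python
import numpy as np
import cvxpy as cp

np.random.seed(2)
n, r = 50, 2
M = np.random.randn(r, n)  # low-rank factor: Q = M.T @ M has rank 2

# For a general positive-definite Q you would compute a factor first, e.g.
#   L = np.linalg.cholesky(Q); M = L.T   so that x^T Q x = ||M x||^2.
x = cp.Variable(n)
y = cp.Variable(r)
prob = cp.Problem(cp.Minimize(cp.sum_squares(y)),  # y^T y = x^T Q x
                  [y == M @ x, cp.sum(x) == 1])
prob.solve()
print(prob.value)
```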