I suspect there is in general not much difference between GMRES and CG for an SPD matrix.
Let's say we are solving $ Ax = b $ with $ A $ symmetric positive definite and the starting guess $ x_0 = 0 $, and generating iterates with CG and GMRES; call them $ x_k^c $ and $ x_k^g $. Both methods build $ x_k $ from the same Krylov space $ K_k = \operatorname{span}\{ b, Ab, A^2b, \ldots, A^{k-1}b \} $, but they pick slightly different elements of it.
CG is characterized by minimizing the error $ e_k^c = x - x_k^c $ in the energy norm induced by $ A $, so that
\begin{equation}
(A e_k^c, e_k^c) = (A (x - x_k^c), x - x_k^c) = \min_{y \in K_k} (A (x-y), x-y).
\end{equation}
GMRES minimizes instead the residual $ r_k = b - A x^g_k $, and does so in the discrete $ \ell^2 $ norm, so that
\begin{equation}
(r_k, r_k) = (b - A x_k^g, b - A x_k^g) = \min_{y \in K_k} (b - Ay, b - Ay).
\end{equation}
Now, using the error equation $ A e_k^g = r_k $, we can also write GMRES as minimizing
\begin{equation}
(r_k, r_k) = (A e_k^g, A e_k^g) = (A^2 e_k^g, e_k^g)
\end{equation}
where I want to emphasize that the last equality relies on the symmetry of $ A $. So CG minimizes the error in the $ A $-norm and GMRES minimizes the error in the $ A^2 $-norm (both of which are genuine norms precisely because $ A $ is SPD). If we wanted the two methods to behave very differently, intuitively we would need an $ A $ for which these two norms are very different; but for SPD $ A $ these norms will behave quite similarly.
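This is easy to check numerically. Here is a minimal self-contained sketch (NumPy, with an arbitrary random SPD test matrix of my own choosing, nothing canonical) that builds the Krylov basis explicitly and computes both iterates as direct projections onto $ K_k $, rather than through the usual short recurrences:

```
import numpy as np

# Arbitrary SPD test problem (any SPD A and b should illustrate the point).
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # SPD by construction
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)          # exact solution, used only to measure errors

def a_norm(v):
    return float(np.sqrt(v @ (A @ v)))

V = np.zeros((n, 0))               # orthonormal basis of K_k, grown one column per step
v = b.copy()
for k in range(1, 11):
    for j in range(V.shape[1]):    # Gram-Schmidt against the current basis
        v = v - (V[:, j] @ v) * V[:, j]
    V = np.column_stack([V, v / np.linalg.norm(v)])
    v = A @ V[:, -1]               # candidate direction for the next Krylov vector

    # CG iterate: minimize (A(x - y), x - y) over y = Vc,
    # i.e. the Galerkin condition V^T A V c = V^T A x = V^T b.
    x_cg = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

    # GMRES iterate: minimize ||b - A V c||_2, a plain least-squares problem.
    x_g = V @ np.linalg.lstsq(A @ V, b, rcond=None)[0]

    print(k, a_norm(x - x_cg), a_norm(x - x_g), a_norm(x - x_g) / a_norm(x - x_cg))
```

The last column is the ratio of the two $ A $-norm errors; CG's is smaller by construction (it is optimal in that norm), and the ratio gives a concrete measure of how differently the two norms behave step by step.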
To get even more specific, in the first iteration, with the Krylov space $ K_1 = \operatorname{span}\{ b \} $, both CG and GMRES will construct an approximation of the form $ x_1 = \alpha b $. CG will choose
\begin{equation}
\alpha = \frac{ (b,b) }{ (Ab,b) }
\end{equation}
and GMRES will choose
\begin{equation}
\alpha = \frac{ (Ab,b) }{ (A^2b,b) }.
\end{equation}
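For completeness, both values drop out of minimizing the corresponding quadratic in $ \alpha $; differentiating, and using $ Ax = b $ together with the symmetry of $ A $,
\begin{equation}
\frac{d}{d\alpha} \, (A(x - \alpha b), x - \alpha b) = -2(b,b) + 2\alpha (Ab,b) = 0 \quad \Longrightarrow \quad \alpha = \frac{(b,b)}{(Ab,b)},
\end{equation}
\begin{equation}
\frac{d}{d\alpha} \, (b - \alpha Ab, b - \alpha Ab) = -2(Ab,b) + 2\alpha (A^2 b,b) = 0 \quad \Longrightarrow \quad \alpha = \frac{(Ab,b)}{(A^2 b,b)}.
\end{equation}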
If $ A $ is diagonal with entries $ (\epsilon,1,1,1,\ldots) $ and $ b = (1,1,0,0,0,\ldots) $, then $ \alpha = 2/(1+\epsilon) $ for CG and $ \alpha = (1+\epsilon)/(1+\epsilon^2) $ for GMRES, so as $ \epsilon \rightarrow 0 $ the first CG step becomes twice as large as the first GMRES step. Probably you can construct $ A $ and $ b $ so that this factor-of-two difference continues throughout the iteration, but I doubt it gets any worse than that.
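A quick numerical check of that limit (the particular values of $ \epsilon $ below are arbitrary):

```
import numpy as np

# alpha_cg = (b,b)/(Ab,b) and alpha_g = (Ab,b)/(A^2 b,b) for the example above.
for eps in (1e-1, 1e-3, 1e-6):
    A = np.diag([eps, 1.0, 1.0, 1.0])
    b = np.array([1.0, 1.0, 0.0, 0.0])
    Ab = A @ b
    alpha_cg = (b @ b) / (Ab @ b)
    alpha_g = (Ab @ b) / (Ab @ Ab)     # (Ab, Ab) = (A^2 b, b) since A is symmetric
    print(f"eps={eps:g}: alpha_cg={alpha_cg:.6f}, alpha_g={alpha_g:.6f}, "
          f"ratio={alpha_cg / alpha_g:.6f}")
```

The printed ratio is $ 2(1+\epsilon^2)/(1+\epsilon)^2 \rightarrow 2 $ as $ \epsilon \rightarrow 0 $.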
If there are theoretical studies out there I'd love to see them, but having asked some of the numerical linear algebra experts in my department, it doesn't seem that there is yet a precise theoretical analysis of what happens with these different norms.
– Reid.Atcheson Jul 08 '13 at 05:27