TL;DR: I don't think you will be able to beat any real/vendor eigensolver, but it's fun to think about.
Deeper cut: One classic algorithm for symmetric eigendecomposition is tridiagonalizing A=QTQ' via Householder reflections, followed by QR iteration on T. The QR iteration is (very) loosely based on the iteration: [Q,R] = qr(A); A = R*Q. That is, alternating between computing a QR decomposition and then multiplying the factors back out in reverse order. Each iterate is similar to (has the same eigenvalues as) the original input, and in the limit of many iterations A converges to a diagonal matrix, thus displaying the eigenvalues.
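To make that concrete, here's a toy sketch of the unshifted iteration in Python/NumPy (my own illustration, not any library's implementation; the test matrix is just an arbitrary small symmetric example, and real codes would never run it this naively):

```python
import numpy as np

def qr_iteration(A, iters=500):
    """Repeat A <- R @ Q. Since R @ Q = Q.T @ A @ Q, every iterate is
    similar to the original A; for symmetric A with distinct eigenvalues
    the off-diagonal entries decay and the diagonal holds the eigenvalues."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)   # approximate eigenvalues

# Small symmetric (tridiagonal) example.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(qr_iteration(A)))   # compare against np.linalg.eigvalsh(A)
```

Without shifts the off-diagonals only decay geometrically (at the ratio of adjacent eigenvalues), which is part of why real solvers do more than this.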
For symmetric positive definite A, I think you could in theory beat this algorithm with a Treppeniteration-like method based on Cholesky decomposition [consult Golub & Van Loan, 3rd ed., chapter 8, problem 8.2.1]. It would be (very) loosely based on the iteration G = chol(A); A = G'*G. That is, computing a Cholesky decomposition A = G*G' and then multiplying the factors in reverse order. Remarkably, this converges to a diagonal matrix too, one which is similar to the original input (G'*G = inv(G)*A*G). The departure from orthogonal iterations is a mild cause for concern, but fortunately the Cholesky decomposition is quite stable, too. You would also want to "frontend" this algorithm with Householder tridiagonalization, so that all the A's in question stay tridiagonal and the Choleskys are all banded ones, with band=1. Fundamentally, band=1 Cholesky is fewer flops/simpler than band=1 QR via Givens rotations, so that's how you could (possibly) come out on top.
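A sketch of the Cholesky-based iteration, under the same caveats as before (a dense toy in NumPy; the banded/tridiagonal version is where the flop savings would actually come from):

```python
import numpy as np

def chol_iteration(A, iters=500):
    """Repeat A <- G' @ G where A = G @ G' is the Cholesky factorization.
    Since G' @ G = inv(G) @ A @ G, each iterate is similar to the original;
    for SPD A with distinct eigenvalues it converges to a diagonal matrix.
    Note the iterate stays SPD, and a tridiagonal iterate stays tridiagonal
    (G is then lower bidiagonal, so G.T @ G is tridiagonal again)."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        G = np.linalg.cholesky(A)   # lower triangular, A = G @ G.T
        A = G.T @ G
    return np.diag(A)

# Same SPD tridiagonal example as above.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(chol_iteration(A)))
```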
I think you could also "accumulate" all these similarity transforms as you go (essentially, banded backsolution steps) to build the eigenvectors. This is much like how the Givens rotations are accumulated in the classic QR iteration. If that is unworkable for some reason, you can always use inverse iteration at the end, once you have the eigenvalues in hand.
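For the orthogonal (QR) case, accumulation is just the running product of the Q's; a minimal sketch of that idea (again my own toy illustration, reusing the small symmetric example from above):

```python
import numpy as np

def qr_iteration_with_vectors(A, iters=500):
    """Unshifted QR iteration, accumulating V = Q1 @ Q2 @ ... so that
    V.T @ A0 @ V equals the final (nearly diagonal) iterate. The columns
    of V are then approximate eigenvectors of the original A0."""
    A = np.array(A, dtype=float)
    V = np.eye(A.shape[0])
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q
        V = V @ Q
    return np.diag(A), V

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
d, V = qr_iteration_with_vectors(A)
# A @ V should be close to V @ diag(d), column by column.
```

In the Cholesky variant the accumulated transforms are triangular rather than orthogonal, which is where the "banded backsolution steps" come in; inverse iteration sidesteps that entirely.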
All that said, real/vendor solvers are not just performing the dumb QR/RQ/QR/RQ iteration .. there's (at the very least) shifting, plus a whole wider class of (faster!) algorithms in the field (divide and conquer, MRRR, bisection, etc.). That is a ridiculous amount of machinery/cumulative improvements to try to compete against .. many (!!) man-years of effort have gone into the development of these algorithms and their implementations (EISPACK/LAPACK/MKL/etc.). I think it's an interesting thought experiment (wow, an eigensolver built from such a simple decomposition) but not a very practical one.
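Just to show how much the simplest of those refinements buys you, here's a toy shifted-and-deflated QR iteration (a naive Rayleigh-quotient shift of my own choosing, not the Wilkinson shift real codes use); it typically converges in a handful of sweeps per eigenvalue instead of hundreds:

```python
import numpy as np

def shifted_qr_eigvals(A, tol=1e-12, max_sweeps=200):
    """Shifted QR iteration with deflation for a symmetric matrix.
    Shift by the bottom-right entry, iterate until the last row's
    off-diagonal part vanishes, record that eigenvalue, then deflate
    to the leading (n-1) x (n-1) block and repeat."""
    A = np.array(A, dtype=float)
    eigs = []
    while A.shape[0] > 1:
        n = A.shape[0]
        for _ in range(max_sweeps):
            if np.linalg.norm(A[n-1, :n-1]) < tol:
                break
            mu = A[n-1, n-1]                     # naive Rayleigh shift
            Q, R = np.linalg.qr(A - mu * np.eye(n))
            A = R @ Q + mu * np.eye(n)           # still similar to A
        eigs.append(A[n-1, n-1])
        A = A[:n-1, :n-1]                        # deflate
    eigs.append(A[0, 0])
    return np.sort(np.array(eigs))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(shifted_qr_eigvals(A))
```

This naive shift can stall on unlucky symmetric matrices (which is exactly why Wilkinson's shift exists), and it's still only scratching the surface of what LAPACK-class solvers do.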