You need to use the matrix $loadings, not $rotmat:
x <- matrix(rnorm(600), 60, 10)                  # 60 observations, 10 variables
prc <- prcomp(x, center = TRUE, scale. = TRUE)   # note: the formal argument is `scale.`
varimax7 <- varimax(prc$rotation[, 1:7])         # varimax-rotate the first 7 loading vectors
newData <- scale(x) %*% varimax7$loadings        # scores on the rotated loadings
The matrix $rotmat is the orthogonal matrix that produces the new loadings from the unrotated ones.
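A quick sketch, re-running the snippet above with a seed, makes the relation between the two matrices concrete: `$loadings` is just the unrotated loadings post-multiplied by the orthogonal `$rotmat`.

```r
set.seed(1)  # for reproducibility
x <- matrix(rnorm(600), 60, 10)
prc <- prcomp(x, center = TRUE, scale. = TRUE)
varimax7 <- varimax(prc$rotation[, 1:7])

# $loadings = (unrotated loadings) %*% $rotmat
all.equal(unclass(varimax7$loadings),
          prc$rotation[, 1:7] %*% varimax7$rotmat,
          check.attributes = FALSE)
# [1] TRUE

# $rotmat is orthogonal: T^T T = I
all.equal(crossprod(varimax7$rotmat), diag(7), check.attributes = FALSE)
# [1] TRUE
```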
EDIT as of Feb 12, 2015:
As rightly pointed out below by @amoeba (see also his/her earlier post, as well as another post from @ttnphns), this answer is not correct. Consider an $n\times m$ data matrix $X$. The singular value decomposition is
$$X = USV^T$$ where $V$ has as its columns the (normalized) eigenvectors of $X^TX$. Now, a rotation is a change of coordinates and amounts to writing the above equality as:
$$X = (UST)(T^TV^T) = U^*V^*$$ with $T$ an orthogonal matrix chosen to make $V^*$ close to sparse (maximum contrast between entries, loosely speaking). Now, if that were all (which it is not), one could post-multiply the equality above by $(V^*)^T$ to obtain the scores $U^*$ as $X(V^*)^T$. But of course we never rotate all PCs. Rather, we consider a subset of $k<m$ components which still provides a decent rank-$k$ approximation of $X$,
$$X \approx (U_kS_k)(V_k^T)$$
so the rotated solution is now
$$X \approx (U_kS_kT_k)(T_k^TV_k^T) = U_k^*V_k^*$$
where now $V_k^*$ is a $k\times m$ matrix. We can no longer simply multiply $X$ by the transpose of $V_k^*$; instead we need to resort to one of the solutions described by @amoeba.
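A small numerical sketch of the identity above (using `svd` directly, and borrowing `varimax`'s `$rotmat` as the orthogonal $T_k$) shows that the rotation cancels inside the rank-$k$ approximation, and that $V_k^*$ is indeed $k\times m$:

```r
set.seed(1)
X <- scale(matrix(rnorm(600), 60, 10))           # n = 60, m = 10; centered and scaled
s <- svd(X)                                      # X = U S V^T
k <- 7
Tk <- varimax(s$v[, 1:k])$rotmat                 # an orthogonal k x k matrix T_k
Uk_star <- s$u[, 1:k] %*% diag(s$d[1:k]) %*% Tk  # U_k^* = U_k S_k T_k
Vk_star <- t(Tk) %*% t(s$v[, 1:k])               # V_k^* = T_k^T V_k^T, a k x m matrix

dim(Vk_star)
# [1]  7 10

# T_k T_k^T = I, so the rotated and unrotated rank-k approximations coincide
Xk <- s$u[, 1:k] %*% diag(s$d[1:k]) %*% t(s$v[, 1:k])
all.equal(Uk_star %*% Vk_star, Xk, check.attributes = FALSE)
# [1] TRUE
```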
In other words, the solution I proposed is only correct in the particular case $k=m$, where it would be useless and nonsensical.
Heartfelt thanks go to @amoeba for making this matter clear to me; I had been living with this misconception for years.
One point where the note above departs from @amoeba's post is that she/he seems to associate $S$ with $V$ in the loadings $L$. I think in PCA it is more common to have $V$'s columns of norm 1 and to absorb $S$ into the principal components' values. In fact, those are usually presented as linear combinations $Xv_i$ $(i=1,\ldots,m)$ of the original (centered, perhaps scaled) variables, subject to $\|v_i\|=1$. Either way is acceptable, I think, as is everything in between (as in biplot analysis).
FURTHER EDIT Feb. 12, 2015
As pointed out by @amoeba, even though $V_k^*$ is rectangular, the solution I proposed might still be acceptable: $V_k^*(V_k^*)^T$ gives the identity matrix and $X(V_k^*)^T \approx U_k^*$. So it all seems to hinge on which definition of the scores one prefers.
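This can be checked numerically as well. Under the same setup as before (with `varimax`'s `$rotmat` standing in for $T_k$), the rectangular $V_k^*$ still satisfies $V_k^*(V_k^*)^T = I$, and post-multiplying $X$ by $(V_k^*)^T$ recovers $U_k^*$:

```r
set.seed(1)
X <- scale(matrix(rnorm(600), 60, 10))   # n = 60, m = 10
s <- svd(X)                              # X = U S V^T
k <- 7
Tk <- varimax(s$v[, 1:k])$rotmat
Uk_star <- s$u[, 1:k] %*% diag(s$d[1:k]) %*% Tk
Vk_star <- t(Tk) %*% t(s$v[, 1:k])

# V_k^* is 7 x 10, yet V_k^* (V_k^*)^T is the 7 x 7 identity
all.equal(Vk_star %*% t(Vk_star), diag(k), check.attributes = FALSE)
# [1] TRUE

# so the scores can still be obtained by post-multiplying X by (V_k^*)^T
all.equal(X %*% t(Vk_star), Uk_star, check.attributes = FALSE)
# [1] TRUE
```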