Using RMSE or even the standard $R^2$ is somewhat unnatural for a count response variable. The median absolute error/deviation (MAD) would definitely be more natural for integer values, but it does not directly reflect a "variance explained" quantity.
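To make the metric concrete, here is a minimal sketch of MAD computed between observed counts and model predictions; the vectors `y` and `y_hat` are hypothetical placeholder data, not from the question:

```r
# Hypothetical observed counts and model predictions.
y     <- c(3, 0, 7, 2, 5)
y_hat <- c(2.8, 0.4, 6.1, 2.5, 4.2)

# Median absolute deviation of predictions from observations.
mad_err <- median(abs(y - y_hat))
mad_err  # 0.5
```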
Given that you are particularly interested in GLMMs (rather than GLMs), I think it is appropriate to look at something specific to GLMMs, such as the `r.squaredGLMM` function implemented in R's MuMIn package. This is essentially the $R^2$ for GLMMs as described by Nakagawa & Schielzeth in their paper *A general and simple method for obtaining $R^2$ from generalized linear mixed-effects models*. Because you have a mixed model with fixed ($X$) and random ($Z$) covariates, it makes sense to report a conditional $R^2$ as well as a marginal $R^2$.
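A short sketch of how this looks in practice, assuming the lme4 and MuMIn packages are installed; the data here are simulated placeholders, not your `Richness` data:

```r
library(lme4)
library(MuMIn)

# Simulated Poisson counts with a random site intercept.
set.seed(1)
site <- factor(rep(1:10, each = 20))
x    <- rnorm(200)
y    <- rpois(200, exp(0.5 + 0.3 * x + rnorm(10)[site]))
d    <- data.frame(Richness = y, X1 = x, Site = site)

fit <- glmer(Richness ~ X1 + (1 | Site), family = poisson, data = d)

# Returns the marginal R2 (fixed effects only) and the
# conditional R2 (fixed + random effects), per Nakagawa & Schielzeth.
r.squaredGLMM(fit)
```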
You could also report Nagelkerke's $R^2$ (e.g. using `fmsb::NagelkerkeR2`); it is fairly standard for GL(M)Ms too.
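Again a sketch with simulated placeholder data, assuming the fmsb package is installed:

```r
library(fmsb)

# Simulated Poisson counts and a single covariate.
set.seed(1)
x <- rnorm(100)
y <- rpois(100, exp(0.4 + 0.5 * x))

fit_glm <- glm(y ~ x, family = poisson)
NagelkerkeR2(fit_glm)  # returns the sample size and Nagelkerke's R2
```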
Having mentioned the above $R^2$ quantities, please note that it is debatable whether $R^2$ measures are really relevant for Poisson regression (or GLMMs in general). Pseudo- and generalised $R^2$ measures come in many variants; see for example a list here and an excellent discussion in this CV thread here.
My advice would be to use MAD as well as a specialised $R^2$, but do not focus much on the $R^2$. Reporting $p$-values with regard to a model's predictive performance is rather pointless.
Regarding your latest comment: using the caret package, and cross-validation in general, is an excellent idea; you should do it. Notice though that cross-validation is best suited to selecting between models (or tuning parameters), not to directly estimating a model's performance on unseen data. Furthermore, irrespective of the $k$ used in your $k$-fold cross-validation scheme, run at least 100 repeats of the $k$-fold procedure to ensure your results are stable. To estimate out-of-sample performance, use hold-out data: a chunk of your data that you never touched during training. Report MAD and the (specialised) $R^2$ of the model's performance on that data. See Zack's answer in the (awesome) thread here for more details.
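The workflow above (train/hold-out split, repeated $k$-fold CV during training, MAD on the hold-out set) can be sketched with caret as follows; the simulated data, 80/20 split and single covariate are illustrative assumptions:

```r
library(caret)

# Simulated placeholder data for a count response.
set.seed(1)
n <- 500
x <- rnorm(n)
d <- data.frame(X1 = x, Richness = rpois(n, exp(0.5 + 0.4 * x)))

# Hold-out split: the test chunk is never touched during training.
idx     <- createDataPartition(d$Richness, p = 0.8, list = FALSE)
train_d <- d[idx, ]
test_d  <- d[-idx, ]

# Repeated 10-fold cross-validation (100 repeats, as advised above).
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 100)
fit  <- train(Richness ~ X1, data = train_d, method = "glm",
              family = poisson, trControl = ctrl)

# Out-of-sample MAD on the hold-out data.
pred <- predict(fit, newdata = test_d)
median(abs(test_d$Richness - pred))
```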
**Comments:**

- "…`glmer`, but you don't mention what kind of response variable `Richness` is. Can you please elaborate on this further? I think this will make your question easier to understand." – usεr11852 Aug 13 '16 at 00:20
- "…`Richness`. I started to look into cross-validation using the `caret` package in R; is this something you would advise to pursue in addition to $R^2$, RMSE and MAD?" – Mud Warrior Aug 13 '16 at 00:45