PCA can be evaluated by looking at the variance of each principal
component.
This actually measures the same thing as the reconstruction error, just expressed in a different way.
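For concreteness, here is a minimal numpy sketch of how the per-component variances fall out of the SVD (the data matrix `X` is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                     # made-up data for illustration
X = X - X.mean(axis=0)                             # PCA assumes centered data

U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained_var = s ** 2 / (X.shape[0] - 1)          # variance along each principal component
explained_ratio = explained_var / explained_var.sum()
print(explained_ratio)                             # fraction of total variance per component
```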
Let's fix $k$ and put this more precisely:
$X$ is the data matrix, $X'$ is its best rank-$k$ approximation (rank-$k$ PCA).
To calculate $X'$ you compute the SVD of $X$ and keep only the $k$ singular vectors with the largest singular values.
The Eckart-Young theorem then tells us that this $X'$ also minimizes the Frobenius norm of $X-X'$, which is defined as
$$\|X-X'\|_F = \sqrt{\sum_{n,m}(X_{n,m} - X'_{n,m})^2}$$
So
$$\|X-X'\|_F^2 =\sum_{n,m}(X_{n,m} - X'_{n,m})^2 = \sum_{n}\|X_n - X'_n\|^2$$
where $X_n$ denotes the $n$-th row, i.e. the $n$-th data point. The last expression is exactly the reconstruction error summed over the data points.
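A quick numerical check of this equality, again as a minimal numpy sketch (random data and the choice $k=3$ are arbitrary, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                                          # arbitrary choice for illustration
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]                    # best rank-k approximation

frob_sq = np.sum((X - X_k) ** 2)                               # squared Frobenius norm
row_sq = np.sum(np.linalg.norm(X - X_k, axis=1) ** 2)          # sum of per-row reconstruction errors
print(np.isclose(frob_sq, row_sq))                             # True: both compute the same quantity
```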
Back to evaluating PCA
The above just says that for a fixed rank $k$ we know how to compute the reconstruction error. I think you also mentioned that you can easily evaluate how the reconstruction error changes as you vary $k$: by Eckart-Young the squared error equals the sum of the squared singular values you dropped, $\sum_{i>k}\sigma_i^2$, so one SVD gives you the error for every $k$ at once.
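A minimal sketch of that, again with made-up data; only the singular values are needed:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X = X - X.mean(axis=0)

s = np.linalg.svd(X, compute_uv=False)                  # singular values only
errors = [np.sum(s[k:] ** 2) for k in range(len(s) + 1)]
for k, err in enumerate(errors):
    print(f"k={k:2d}  squared reconstruction error = {err:.3f}")
```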
This is where evaluating an autoencoder and evaluating PCA diverge: the latent variables of an autoencoder aren't guaranteed to be orthogonal, so you can't decompose the reconstruction error per latent dimension the way you can for PCA. Also, since the encoding/decoding in an autoencoder is nonlinear, you don't know how variance in the latent space translates to variance in input space.
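What does carry over is the overall reconstruction error itself; it just doesn't decompose per latent dimension. A minimal sketch, where `encode`/`decode` are hypothetical stand-ins for a trained encoder and decoder (any framework's forward pass would do):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

def encode(X):
    # hypothetical stand-in for a trained encoder
    return X[:, :3]

def decode(Z):
    # hypothetical stand-in for the matching decoder
    return np.hstack([Z, np.zeros((Z.shape[0], 7))])

X_rec = decode(encode(X))
recon_error = np.sum(np.linalg.norm(X - X_rec, axis=1) ** 2)   # same metric as for PCA
print(recon_error)
```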