Have you checked your dataset for constant features, and looked at the R² of every pair of variables? 27K features looks like a lot.
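As a minimal sketch of that check (the data here is synthetic, and the 0.99 R² cutoff is an arbitrary illustrative threshold, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 2] = 1.0          # plant a constant feature
X[:, 4] = X[:, 0]      # plant a perfectly correlated pair

# Constant features have zero variance
const_mask = X.std(axis=0) == 0
print(np.flatnonzero(const_mask))  # -> [2]

# Pairwise R^2 among the remaining features
Xv = X[:, ~const_mask]
r2 = np.corrcoef(Xv, rowvar=False) ** 2
iu = np.triu_indices_from(r2, k=1)
high = [(i, j) for i, j in zip(*iu) if r2[i, j] > 0.99]
print(high)  # pairs of (reindexed) columns that are near-duplicates
```

Dropping the constants and one feature from each near-duplicate pair can shrink the 27K columns before any PCA is attempted.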
Otherwise: doing a PCA amounts to finding a set of orthogonal linear combinations of your features, ordered so that the first component explains the most variance of the original dataset.
So you could still run several PCAs on disjoint subsets of your features. If you keep only the most important PCs from each, that gives you a new dataset on which you can run a PCA anew. (If you keep them all, there is no dimension reduction.)
But the result will differ from what a single PCA on the full dataset would give.
Some information is lost when only the most important PCs are kept, and that loss can be unbalanced across the subsets of original features.
So if you take the two main PCs of the final PCA, the result will not be the two dimensions explaining the most variance of the whole dataset, but the two most distinct dimensions of a subset of it.
In the end, the interpretation can be done the same way as for a classic PCA, but you will have to code all the steps that decompose the final PCs back into the original features.
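The two-stage procedure above can be sketched as follows (in Python with scikit-learn rather than MATLAB; the block sizes and component counts are arbitrary illustrative choices, and the loading chain ignores the per-stage centering offsets):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))   # stand-in for the real wide matrix

n_blocks, k_per_block = 4, 5     # illustrative, not prescriptions
blocks = np.array_split(np.arange(X.shape[1]), n_blocks)

# Stage 1: PCA on each disjoint feature subset, keep the top PCs
stage1 = [PCA(n_components=k_per_block).fit(X[:, b]) for b in blocks]
Z = np.hstack([m.transform(X[:, b]) for m, b in zip(stage1, blocks)])

# Stage 2: PCA on the concatenated block scores
final = PCA(n_components=2).fit(Z)
scores = final.transform(Z)      # (n_samples, 2)

# Interpretation: each final PC is a linear combo of stage-1 PCs,
# which are themselves linear combos of the original features
# within each block, so the loadings can be chained.
W1 = np.zeros((X.shape[1], Z.shape[1]))
col = 0
for m, b in zip(stage1, blocks):
    W1[b, col:col + m.n_components_] = m.components_.T
    col += m.n_components_
full_loadings = W1 @ final.components_.T   # (n_features, 2)
```

The `full_loadings` matrix expresses each final component in terms of the original features, which is the "decomposition of the PC steps" the answer refers to.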
The 'econ' option of MATLAB's pca could be useful. – jeff Feb 04 '16 at 15:11