This result is a direct, simple consequence of the fact that the rank of the $n\times p$ matrix $X$ cannot be any greater than the smaller of $n$ and $p$, which is strictly less than $p$ in this case, where $n < p$. Because $X^\prime X$ has the same rank as $X$, the $p\times p$ matrix $X^\prime X$ has rank less than $p$ and therefore is singular, which is equivalent to the existence of a nonzero $x$ for which $X^\prime X x = 0$. Consequently $$x^\prime X^\prime X x = x^\prime 0 = 0,$$ demonstrating that $X^\prime X$ cannot be definite.
Although I referenced $X$ in this argument, the column-centered version of $X$ that is used in computing the covariance matrix also has dimensions $n\times p$, so the same conclusions apply to it.
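A quick numerical check can make this concrete. The following is a minimal sketch in Python with NumPy (my own illustration, with made-up dimensions $n=5$, $p=8$): the covariance matrix has rank at most $\min(n,p)$ (in fact at most $n-1$, because centering removes one more dimension), and an eigenvector for a zero eigenvalue supplies a nonzero $x$ with $x^\prime C x = 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 8                      # fewer observations than variables
X = rng.normal(size=(n, p))

Y = X - X.mean(axis=0)           # column-centered version of X
C = Y.T @ Y / (n - 1)            # p x p covariance matrix

print(np.linalg.matrix_rank(C))  # at most min(n, p) = 5; here n - 1 = 4 after centering
w, V = np.linalg.eigh(C)         # eigenvalues in ascending order
x = V[:, 0]                      # eigenvector for the smallest (numerically zero) eigenvalue
print(x @ C @ x)                 # ~ 0 for this nonzero x, so C is not definite
```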
Definitions
The rank of a matrix $X$ is the dimension of its image, defined to be the set of all $Xx$ as $x$ ranges among all possible vectors.
The column-centered version of a matrix is obtained by subtracting the arithmetic mean of each column from the entries in that column.
The covariance matrix of $X$ is proportional to $Y^\prime Y$ where $Y$ is the column-centered version of $X$. (Depending on convention, the factor of proportionality is $1/n$ or $1/(n-1)$.)
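As a sketch of these two definitions (my own illustration, using the $1/(n-1)$ convention, which NumPy's `np.cov` follows by default; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 4
X = rng.normal(size=(n, p))

Y = X - X.mean(axis=0)           # column-centered version of X
C = Y.T @ Y / (n - 1)            # covariance matrix, 1/(n-1) convention

# Matches NumPy's estimator when rows are observations and columns are variables.
print(np.allclose(C, np.cov(X, rowvar=False)))   # True
```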
A square matrix $A$ is singular when it has no multiplicative inverse. Equivalently, there is a nonzero vector $x$ for which $Ax=0$. ($A$ has a nontrivial kernel.) Equivalently, the rank of $A$ is strictly less than the dimension of the space into which it maps (equal to the number of rows of $A$).
A square matrix $A$ is semi-definite when all numbers of the form $x^\prime A x$ have the same sign (or are zero), regardless of what the vector $x$ might be. According to the sign, $A$ would be called negative semi-definite or positive semi-definite.
A semi-definite square matrix $A$ is definite when the only vector $x$ for which $x^\prime A x = 0$ is the zero vector itself.
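(In particular, for any real matrix $X$ and any vector $x$, $x^\prime X^\prime X x = (Xx)^\prime (Xx) = \lVert Xx\rVert^2 \ge 0$, so $X^\prime X$ is always positive semi-definite; the argument above shows that when $n < p$ it is semi-definite but not definite.)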