Applying the Law of Iterated Expectations,
$$E[(\mathbf{x}_i\cdot \epsilon_i)(\mathbf{x}_i\cdot \epsilon_i)'] = E[\epsilon_i^2\mathbf{x}_i\mathbf{x}_i']=E\Big(E[\epsilon_i^2\mid \mathbf{x}_i]\mathbf{x}_i\mathbf{x}_i'\Big) $$
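As a sanity check (not part of the argument), this identity can be verified numerically. The sketch below assumes a made-up conditional-variance (skedastic) function $h(\mathbf{x}_i)=E[\epsilon_i^2\mid \mathbf{x}_i]=1+x_{1i}^2$ and draws $\epsilon_i=\sqrt{h(\mathbf{x}_i)}\,u_i$ with $u_i$ standard normal and independent of $\mathbf{x}_i$; the two sample averages should then agree up to simulation noise.

```python
import numpy as np

# Monte Carlo sketch: E[eps^2 * x x'] vs E[ E[eps^2 | x] * x x' ].
# Hypothetical skedastic function: h(x) = 1 + x1^2 (an assumption for illustration).
rng = np.random.default_rng(0)
n = 200_000

X = rng.normal(size=(n, 2))            # two regressors, x_i = (x1, x2)'
h = 1.0 + X[:, 0] ** 2                 # E[eps^2 | x]
eps = np.sqrt(h) * rng.normal(size=n)  # eps = sqrt(h(x)) * u, u ~ N(0,1) indep. of x

# Left-hand side: sample average of eps_i^2 * x_i x_i'
lhs = (eps[:, None] * X).T @ (eps[:, None] * X) / n
# Right-hand side: sample average of h(x_i) * x_i x_i'
rhs = (np.sqrt(h)[:, None] * X).T @ (np.sqrt(h)[:, None] * X) / n

print(lhs)
print(rhs)   # the two matrices agree up to Monte Carlo error
```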
For expository purposes, assume there are just two regressors, $X_1$ and $X_2$, and set $E[\epsilon_i^2\mid \mathbf{x}_i] \equiv h$. Then the determinant of the matrix (where we first take the expected value and then calculate the determinant) is
$$D=E[hX_1^2]\cdot E[hX_2^2] - \left(E[hX_1X_2]\right)^2 \neq 0$$
since the matrix is assumed non-singular. First, it is evident that if one variable were a linear function of the other, the above determinant would be zero. So the assumption of non-singularity rules out exact linear dependence; in particular, it rules out a correlation coefficient $\rho_x$ equal to unity (keep this in mind).
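To see this concretely, suppose purely for illustration that $X_2 = cX_1$ for some constant $c \neq 0$. Then
$$D = E[hX_1^2]\cdot E[hc^2X_1^2] - \left(E[hcX_1^2]\right)^2 = c^2\left(E[hX_1^2]\right)^2 - c^2\left(E[hX_1^2]\right)^2 = 0.$$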
Now set $Z_1 =\sqrt{h}X_1$ and $Z_2 =\sqrt{h}X_2$. (Note that the correlation coefficient $\rho_z$ between $Z_1$ and $Z_2$ cannot be equal to unity either.) Then the determinant can be written
$$D= E[Z_1^2]\cdot E[Z_2^2] - \left(E[Z_1Z_2]\right)^2$$
Without loss of generality, assume that the variables have zero mean. Then we have
$$D= {\rm Var}(Z_1)\cdot {\rm Var}(Z_2) - \left({\rm Cov}[Z_1,Z_2]\right)^2$$
$$= {\rm Var}(Z_1)\cdot {\rm Var}(Z_2) - \rho_z^2 {\rm Var}(Z_1)\cdot {\rm Var}(Z_2)$$
$$= (1-\rho_z^2)\cdot {\rm Var}(Z_1)\cdot {\rm Var}(Z_2)$$
For positive definiteness we want all leading principal minors to be strictly positive. Here, the first minor is ${\rm Var}(Z_1)>0$ and the second minor is $D>0$, since $|\rho_z| < 1$.
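If it helps, here is a small numerical illustration of the two-regressor argument (a sketch only, reusing the made-up skedastic function $h(\mathbf{x}) = 1 + x_1^2$ from above): it forms sample versions of $Z_1$ and $Z_2$, estimates $\rho_z$, and checks that both leading principal minors are positive.

```python
import numpy as np

# Sketch for the two-regressor case: leading principal minors of E[Z Z'].
# Hypothetical skedastic function h(x) = 1 + x1^2, as in the sketch above.
rng = np.random.default_rng(1)
n = 200_000

X = rng.normal(size=(n, 2))
h = 1.0 + X[:, 0] ** 2
Z = np.sqrt(h)[:, None] * X                   # Z1 = sqrt(h) X1,  Z2 = sqrt(h) X2

var_z1, var_z2 = Z.var(axis=0)
rho_z = np.corrcoef(Z[:, 0], Z[:, 1])[0, 1]

minor1 = var_z1                               # first leading principal minor
minor2 = (1.0 - rho_z**2) * var_z1 * var_z2   # determinant D
print(minor1 > 0, minor2 > 0, abs(rho_z) < 1) # all True in this example
```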
ADDENDUM
Moving to three dimensions, under the expected value and the transformation to the $Z$ variables, we have a variance-covariance matrix of full rank. Being a covariance matrix, it is positive semi-definite, so $D_{3\times 3} \geq 0$. But by assumption it is non-singular (so $D_{3\times 3} \neq 0$), and therefore it is positive definite, since we are left only with $D_{3\times 3} > 0$.
With all leading principal minors up to three dimensions strictly positive, let's move to four dimensions. We only need to show, in addition, that the $4\times 4$ determinant is greater than zero. But the matrix is a covariance matrix, so $D_{4\times 4} \geq 0$; and since this is also the full determinant of the matrix, which by assumption is non-singular, we have $D_{4\times 4} > 0$. Hence the matrix is positive definite. Move on to five dimensions: same reasoning, and so on.
I stress again that this result depends critically on the expected value operator, which turns the matrix into a variance-covariance matrix. Without it, the outer product of a single $k\times 1$ column vector of numbers is a singular (rank-one) matrix.
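The same point can be illustrated numerically for any $k$. The sketch below (again with an invented skedastic function and purely illustrative parameter choices) forms the sample analogue of $E[\epsilon_i^2\mathbf{x}_i\mathbf{x}_i']$, checks that all its eigenvalues (equivalently, all leading principal minors) are strictly positive, and contrasts this with a single outer product $\mathbf{x}_i\mathbf{x}_i'$, which has rank one.

```python
import numpy as np

# General k: the sample analogue of E[eps^2 * x x'] should be positive definite,
# while a single outer product x_i x_i' is rank one (singular for k > 1).
rng = np.random.default_rng(2)
n, k = 200_000, 5

X = rng.normal(size=(n, k))
h = 1.0 + X[:, 0] ** 2                      # hypothetical E[eps^2 | x]
eps = np.sqrt(h) * rng.normal(size=n)

S = (eps[:, None] * X).T @ (eps[:, None] * X) / n   # averaged outer products
print(np.linalg.eigvalsh(S).min() > 0)              # True: positive definite

single = np.outer(X[0], X[0])                       # one outer product, no averaging
print(np.linalg.matrix_rank(single))                # 1: singular for k > 1
```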
If I wanted to generalize your answer to more than two regressors, how would I do it?
– An old man in the sea. Jan 01 '15 at 23:16