$Q$ and $X$ (the question you linked discusses $X^t y$) are different, but closely related, matrices.
$X$, as you know, is the design matrix for the regression; its columns are the predictors in your model.
Every $n \times m$ matrix with $n \geq m$ (in particular the design matrix $X$) has a $QR$-factorization (also known as a $QR$-decomposition). This is a factorization of the matrix of the form
$$ X = QR $$
where $Q$ is an $n \times m$ matrix with orthonormal columns (so $Q^t Q = I$), and $R$ is an $m \times m$ upper triangular matrix.
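To make this concrete, here is a minimal sketch in R with a made-up design matrix (the names `X`, `n`, and the predictors below are purely illustrative, not anything from your model): `qr()` computes the factorization, and `qr.Q()` / `qr.R()` extract the two factors.

```r
## A small made-up design matrix: an intercept column plus two predictors.
set.seed(1)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))

qr_X <- qr(X)     # compute the QR factorization
Q <- qr.Q(qr_X)   # n x m matrix with orthonormal columns
R <- qr.R(qr_X)   # m x m upper triangular matrix

all.equal(X, Q %*% R)                   # X is recovered from the factors
all.equal(crossprod(Q), diag(ncol(X)))  # Q^t Q = I
```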
The $QR$ decomposition is important because it makes solving linear equations easy. Take the normal equations for regression, for example:
$$ X^t X \beta = X^t y $$
If you have factored $X = QR$ then you can substitute this into the linear equations. The left hand side of the equation becomes
$$ X^t X = R^t Q^t Q R = R^t R $$
and the right hand side becomes $X^t y = R^t Q^t y$, so the whole thing is
$$ R^t R \beta = R^t Q^t y $$
If $X$ is full rank, then so is $R$, so $R^t$ is invertible and can be cancelled to get
$$ R \beta = Q^t y $$
After computing $Q^t y$, this last equation can be solved by simple back substitution.
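Continuing the made-up example above with an illustrative response `y`, here is a sketch of that pipeline in R: `qr.qty()` computes $Q^t y$ without ever forming $Q$ explicitly, and `backsolve()` does the back substitution. The result should agree with `lm()` up to floating point error.

```r
## A made-up response for the design matrix X from the previous sketch.
y <- drop(X %*% c(2, 3, -1)) + rnorm(n)

Qty  <- qr.qty(qr_X, y)               # t(Q) %*% y, computed without forming Q
beta <- backsolve(R, Qty[1:ncol(X)])  # solve R beta = Q^t y by back substitution

## Should match the coefficients reported by lm.
cbind(qr_solve = beta, lm_fit = coef(lm(y ~ X[, 2] + X[, 3])))
```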
As for the quote:

> For a linear model fitted by lm or aov, the effects are the uncorrelated single-degree-of-freedom values obtained by projecting the data onto the successive orthogonal subspaces
The components of the vector $Q^t y$ are dot products: each column of $Q$ is dotted with $y$, and that number becomes a component of the resulting vector. Since the columns of $Q$ are unit vectors, this dot product does indeed give the projection of $y$ onto the corresponding column of $Q$ (and the columns, as you remember, are mutually orthogonal).
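As a sanity check (still just a sketch on the same made-up data), the effects vector that R reports for an `lm` fit should agree, up to floating point error, with $Q^t y$ computed from the QR decomposition stored in the fitted object; the first $m$ entries correspond to the coefficients and the remaining $n - m$ live in the residual space.

```r
fit <- lm(y ~ X[, 2] + X[, 3])

e   <- effects(fit)       # projections of y onto the successive orthogonal directions
Qty <- qr.qty(fit$qr, y)  # t(Q) %*% y from the QR decomposition stored in the fit

all.equal(as.vector(e), as.vector(Qty))  # should be TRUE
head(e, ncol(X))                         # first m entries correspond to the coefficients
```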
If you'd like to know more, especially how the $QR$ decomposition is computed in practice, I wrote about it here.