In the paper *Matrix Factorization Techniques for Recommender Systems*, Koren, Bell, and Volinsky describe how the matrix $R_{n \times k}$ (users $\times$ movie ratings) can be decomposed into matrices $P_{n \times m}$ and $Q_{m \times k}$ via a matrix factorization learned with stochastic gradient descent. So we approximate $\hat{r}_{ui} = q_i^T p_u$ and the prediction error is
$$ e_{ui} = r_{ui} - q_i^T p_u $$
which leads to the SGD update rules
$$ q_i \leftarrow q_i + \gamma (e_{ui} p_u-\lambda q_i) \\ p_u \leftarrow p_u + \gamma (e_{ui} q_i-\lambda p_u) $$
where $\lambda$ is the regularization parameter and $\gamma$ is the learning rate. The algorithm iterates over the observed ratings, starting from randomly initialized $P$ and $Q$.
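For concreteness, the loop might look like this in NumPy (a minimal sketch of my understanding, not the authors' actual implementation; the dense-`R`-with-NaNs representation and the hyperparameter values are my own choices):

```python
import numpy as np

def sgd_mf(R, m, gamma=0.01, lam=0.1, epochs=300, seed=0):
    """SGD matrix factorization sketch.

    R: (n, k) array of ratings, np.nan where a user hasn't rated a movie.
    m: number of latent factors.
    Returns P (n x m) and Q (m x k) with R approx. P @ Q.
    """
    rng = np.random.default_rng(seed)
    n, k = R.shape
    P = 0.1 * rng.standard_normal((n, m))   # user factors, row u is p_u
    Q = 0.1 * rng.standard_normal((m, k))   # item factors, column i is q_i
    users, items = np.where(~np.isnan(R))   # observed (u, i) pairs
    for _ in range(epochs):
        for u, i in zip(users, items):
            e = R[u, i] - Q[:, i] @ P[u]    # e_ui = r_ui - q_i^T p_u
            qi, pu = Q[:, i].copy(), P[u].copy()
            Q[:, i] += gamma * (e * pu - lam * qi)   # q_i update rule
            P[u]    += gamma * (e * qi - lam * pu)   # p_u update rule
    return P, Q
```

(The copies of `q_i` and `p_u` make both updates use the same pre-update values, matching the rules as written.)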
What exactly can we do when a new episode of Star Wars appears and our matrix expands to $R_{n \times (k+1)}$? The authors seem to say that the algorithm can handle that, but don't say how. I guess that we simply expand $Q_{m \times k}$ to $Q_{m \times (k+1)}$ with a randomly initialized column, or is there a better approach?
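To make my guess concrete (the function name and the fold-in loop below are my own, not from the paper): append a randomly initialized column to $Q$ and, once some ratings for the new movie arrive, run the same SGD update on just that column while holding $P$ fixed:

```python
import numpy as np

def add_item(P, Q, new_ratings, gamma=0.01, lam=0.1, epochs=100, seed=0):
    """Fold a new item into a trained factorization (sketch of my guess).

    P: (n, m) user factors, Q: (m, k) item factors, both already trained.
    new_ratings: (n,) ratings of the new movie, np.nan where unrated.
    Returns Q expanded to (m, k+1).
    """
    rng = np.random.default_rng(seed)
    m = Q.shape[0]
    q_new = 0.1 * rng.standard_normal(m)        # random init, as in training
    raters = np.where(~np.isnan(new_ratings))[0]
    for _ in range(epochs):
        for u in raters:
            e = new_ratings[u] - q_new @ P[u]   # e_ui for the new item
            q_new += gamma * (e * P[u] - lam * q_new)   # q_i update only
    return np.column_stack([Q, q_new])          # Q is now m x (k+1)
```

Keeping $P$ fixed turns this into a small per-item subproblem, but I don't know whether the authors intend a full retraining instead.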