Rather than trying to guess the appropriate number of iterations, you can evaluate the model's perplexity every $k$ iterations and check whether the change since the last evaluation falls within a chosen tolerance.
For example, the scikit-learn implementation of LDA lets you set this via the evaluate_every and perp_tol parameters. This excerpt from the source code demonstrates how it's done:
    if evaluate_every > 0 and (i + 1) % evaluate_every == 0:
        doc_topics_distr, _ = self._e_step(X, cal_sstats=False,
                                           random_init=False,
                                           parallel=parallel)
        bound = self.perplexity(X, doc_topics_distr,
                                sub_sampling=False)
        if self.verbose:
            print('iteration: %d, perplexity: %.4f'
                  % (i + 1, bound))
        if last_bound and abs(last_bound - bound) < self.perp_tol:
            break
        last_bound = bound

    self.n_iter_ += 1
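To see this stopping rule in action, here is a minimal sketch using scikit-learn's LatentDirichletAllocation; evaluate_every and perp_tol are real constructor parameters, while the tiny corpus and the specific values chosen are purely illustrative.

```python
# Sketch: enabling perplexity-based early stopping in scikit-learn's LDA.
# The toy corpus and parameter values below are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets rose sharply today",
    "investors traded shares on the market",
]
X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(
    n_components=2,
    max_iter=100,        # upper bound on iterations
    evaluate_every=5,    # compute perplexity every 5 iterations
    perp_tol=0.1,        # stop once the change in perplexity is below this
    random_state=0,
)
lda.fit(X)
print(lda.n_iter_)  # can be well below max_iter if perplexity converged early
```

Note that the perplexity check only runs when evaluate_every is greater than zero (it is disabled by default, since evaluating perplexity at every check adds an extra E-step and slows training).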