As @Tim suggests (+1), yes, you can use more than one form of regularisation at once, but the question would be why you would use early stopping in conjunction with L2 (or other) regularisation. Early stopping is not all that easy to implement reliably: it is not always clear where the best error on the validation set is located, especially as the validation set is often rather small, and hence its estimate of the loss may have high variance. If you use e.g. Bayesian methods, or virtual leave-one-out cross-validation, for tuning the hyper-parameters, then you can use all of the data for fitting the model, rather than reserving some for the validation set, which is also likely to give a better model.
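To make the first problem concrete, here is a minimal sketch of patience-based early stopping on a toy over-parameterised regression problem (the data, learning rate and patience value are all arbitrary choices for illustration): with only a handful of validation points, the stopping rule can easily fire on noise in the validation loss rather than on genuine over-fitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy over-parameterised regression: 15 training points, 20 features,
# and a deliberately small (10-point) validation set.
w_true = rng.normal(size=20)
X_tr = rng.normal(size=(15, 20))
y_tr = X_tr @ w_true + rng.normal(scale=0.5, size=15)
X_va = rng.normal(size=(10, 20))
y_va = X_va @ w_true + rng.normal(scale=0.5, size=10)

# Gradient descent with patience-based early stopping.
w, lr = np.zeros(20), 0.01
best_val, best_w, since_best, patience = np.inf, w.copy(), 0, 25
for epoch in range(5000):
    w -= lr * 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    val = np.mean((X_va @ w - y_va) ** 2)  # noisy: only 10 points
    if val < best_val:
        best_val, best_w, since_best = val, w.copy(), 0
    else:
        since_best += 1
    if since_best >= patience:
        break  # this minimum may be validation noise, not true over-fitting
print(f"stopped after {epoch + 1} epochs, val MSE {best_val:.3f}")
```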
The advantage of L2 regularisation is that over-fitting is largely controlled by optimising just a single (usually continuous) hyper-parameter, and we can train the model to convergence without worrying about over-fitting. Early stopping is also less reproducible than regularisation, as the number of iterations before stopping depends more strongly on the random initialisation of the weights.
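By contrast, tuning the ridge parameter is a one-dimensional search, and (as mentioned above) virtual leave-one-out cross-validation lets you use all of the data for fitting. A minimal sketch with scikit-learn's `RidgeCV`, which by default performs an efficient leave-one-out cross-validation over the candidate values (the toy data and grid are arbitrary):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=40)

# One continuous hyper-parameter, tuned by efficient ("virtual")
# leave-one-out cross-validation; all 40 points are used for fitting,
# and each candidate model is trained to convergence.
model = RidgeCV(alphas=np.logspace(-4, 4, 50))  # cv=None -> efficient LOO-CV
model.fit(X, y)
print("chosen lambda:", model.alpha_)
```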
Note that if we initialise the weights to small random values, then early stopping also encourages the final values to remain close to the origin, so it is likely to have a vaguely similar effect to L2 regularisation anyway.
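A quick numerical illustration of that connection (toy data; the step size, checkpoints and penalty values are arbitrary): running gradient descent on the *unregularised* least-squares loss from the origin, the weight norm grows with the number of iterations, so stopping earlier yields a smaller-norm solution, qualitatively like ridge regression with a larger penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=50)

def ridge_norm(lam):
    # Norm of the ridge solution: w = (X'X + lam*I)^{-1} X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
    return np.linalg.norm(w)

# Gradient descent on the unregularised loss, starting at the origin:
# the weight norm grows as training proceeds, so stopping early leaves
# a small-norm solution, much as a large L2 penalty would.
w, lr = np.zeros(10), 1e-3
for t in range(1, 2001):
    w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    if t in (10, 100, 2000):
        print(f"iter {t:5d}: ||w|| = {np.linalg.norm(w):.3f}")

for lam in (100.0, 10.0, 0.01):
    print(f"lambda {lam:7.2f}: ||w|| = {ridge_norm(lam):.3f}")
```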