As you noticed, $m$ stands for the number of parameters and $w$ stands for the weights. If you understand this in the context of regression, the same idea applies to neural networks. A generalized linear model (like linear regression, logistic regression, etc.) is
$$
E[y|X] = \sigma(\, w_1 x_1 + w_2 x_2 + \dots + w_m x_m + b \,)
$$
where $w_i$ are the regression coefficients (weights), $b$ is the bias term, and $\sigma$ is the inverse of the link function (or the activation function, in neural network terms). This is basically a single neuron of a neural network; the difference is that a neural network has many such neurons arranged in layers, where each layer takes some input and its outputs serve as the inputs (the $x$'s) to the next layer. So for a neural network, $m$ would be the total number of parameters summed over all the neurons.
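To make this concrete, here is a minimal NumPy sketch (the function names are my own, and I'm assuming a sigmoid activation) showing that a single neuron computes exactly the GLM above, and that a dense layer is just many such neurons whose outputs feed the next layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single "neuron": the GLM above, E[y|X] = sigma(w.x + b)
def neuron(x, w, b):
    return sigmoid(x @ w + b)

# A dense layer is many such neurons evaluated on the same input;
# W holds one column of weights per neuron, b one bias per neuron.
def dense_layer(x, W, b):
    return sigmoid(x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                               # 3 input features
w, b = rng.normal(size=3), 0.1                       # one neuron: 3 weights + 1 bias
W, B = rng.normal(size=(3, 4)), rng.normal(size=4)   # a layer of 4 neurons

h = dense_layer(x, W, B)   # 4 outputs, which become the x's of the next layer
print(neuron(x, w, b), h)
```

Counting the parameters of the layer above gives $3 \times 4$ weights plus $4$ biases, and summing such counts over all layers gives the $m$ of the network.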
However, notice that the formula you quote can be simplified. Without going into the details, the bare-bones $L_2$ regularization is
$$
\text{loss}(y, f(X)) \, + \lambda\, \| \boldsymbol{w} \|^2_2
$$
Scaling by $m$ is needed only to make the $\lambda$ term easier to interpret; since dividing the penalty by $m$ just rescales $\lambda$ by a constant, it doesn't really matter in practice, because you find $\lambda$ with a hyperparameter tuning procedure anyway and don't interpret its raw value directly.
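A tiny NumPy sketch of that equivalence (my own function names; the exact scaling convention in the formula you quote may differ, e.g. $1/(2m)$ instead of $1/m$):

```python
import numpy as np

def l2_penalty(w, lam):
    # bare-bones L2 term: lambda * ||w||_2^2
    return lam * np.sum(w ** 2)

def l2_penalty_scaled(w, lam, m):
    # scaled variant: (lambda / m) * ||w||_2^2
    return lam / m * np.sum(w ** 2)

w = np.array([0.5, -1.2, 2.0])
m = len(w)

# The two versions differ only by a constant factor, so tuning lambda for
# one is equivalent to tuning a rescaled lambda for the other:
print(l2_penalty(w, lam=0.1))                    # 0.569
print(l2_penalty_scaled(w, lam=0.1 * m, m=m))    # identical value
```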
For a recap on regularization, you can check Andrew Ng's lecture.