There are many measures of complexity; they were actively studied by researchers exploring the bias-variance tradeoff. There is no single definition because measuring complexity is hard. For example, it's not a mere count of neurons: the outputs of some neurons are inputs to others, so they interact. Some neurons may be redundant, so they shouldn't be counted at all. Research on pruning neural networks has shown that many networks can be reduced to much smaller ones without loss of performance, so they contain a smaller sub-network that does all the work. A minimal sketch of this idea follows.
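The sketch below illustrates the pruning idea with PyTorch's built-in utility, assuming a toy two-layer model and an arbitrary 70% sparsity level; it only shows how small-magnitude weights get zeroed out, not any particular pruning result from the research mentioned above.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy two-layer network standing in for a trained model (hypothetical sizes).
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))

# Zero out the 70% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.7)

# Count how many weights are now zero; the surviving sub-network is what
# would be fine-tuned and evaluated against the original.
zeros = sum(int((m.weight == 0).sum()) for m in model.modules()
            if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules()
            if isinstance(m, nn.Linear))
print(f"pruned {zeros}/{total} weights ({zeros / total:.0%})")
```

Whether the pruned model keeps its accuracy depends on the network and the amount of retraining; the point here is only that a large fraction of the parameters can often be removed.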
Another problem is that even if we had a single, reliable measure of model complexity, what matters more is the quality of the data, not the quantity. A few billion samples that are nearly identical, or heavily biased, are no better than a small collection of high-quality random samples. And the data we usually feed to machine learning algorithms is rarely sampled at random or representative.