
Is there a measure for model complexity?

For a given value of this measure, how many examples do we need to train a network so that it learns the model correctly and generalizes?

In essence, what is the relation between model complexity, the number of training examples, and network size?

Justaperson

1 Answer


There are many measures of complexity; they were actively studied by researchers exploring the bias-variance tradeoff. There is no single definition because measuring complexity is hard. For example, it is not a mere count of neurons, because the outputs of some neurons are inputs to others, so they interact. Some neurons may be redundant, so we should not count them. Research on pruning neural networks has shown that many networks can be reduced to smaller ones without loss of performance, so they contain a smaller sub-network that does all the work.
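
The pruning point can be made concrete in a few lines. Below is a minimal sketch, assuming PyTorch is available; the architecture, the 80% pruning fraction, and the absence of any training are arbitrary illustrative choices, and in practice one would verify that accuracy is preserved (and usually fine-tune) after pruning.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small, deliberately over-parameterized MLP (sizes are made up for illustration).
model = nn.Sequential(
    nn.Linear(20, 256),
    nn.ReLU(),
    nn.Linear(256, 1),
)

total_params = sum(p.numel() for p in model.parameters())

# Magnitude pruning: zero out the 80% smallest-magnitude weights in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Count the weights that survive; the raw parameter count overstates the
# complexity of the function the pruned network actually computes.
remaining = sum(int(m.weight.count_nonzero()) for m in model if isinstance(m, nn.Linear))
print(f"total parameters: {total_params}, nonzero weights after pruning: {remaining}")
```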

Another problem is that even if we had a single, reliable measure of model complexity, what matters more is the quality of the data, not the quantity. If you had a few billion samples that were nearly identical, or heavily biased, they wouldn't be any better than just a bunch of high-quality random samples. The data we usually feed to machine learning algorithms is rarely sampled at random and representative.
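
As a toy illustration of the quality-versus-quantity point, here is a sketch (assuming NumPy and scikit-learn are available; the linear ground truth, noise level, and sample sizes are made up): a huge set of nearly identical samples is no better at recovering the underlying relationship than a small random sample that covers the input range, and is typically worse.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def noisy_target(x):
    # True relationship is y = 3x; observations carry a little noise.
    return 3.0 * x + rng.normal(scale=0.1, size=x.shape)

# (a) 100,000 nearly identical samples, all clustered around x = 0.5.
x_dup = 0.5 + rng.normal(scale=0.001, size=(100_000, 1))
y_dup = noisy_target(x_dup)

# (b) 100 samples drawn uniformly over the whole input range.
x_rand = rng.uniform(0.0, 1.0, size=(100, 1))
y_rand = noisy_target(x_rand)

# Evaluate against the noise-free truth on a test set spanning the range.
x_test = rng.uniform(0.0, 1.0, size=(1_000, 1))
y_true = 3.0 * x_test

for name, (x, y) in {"near-duplicates": (x_dup, y_dup),
                     "random sample":   (x_rand, y_rand)}.items():
    fit = LinearRegression().fit(x, y)
    mse = float(np.mean((fit.predict(x_test) - y_true) ** 2))
    print(f"{name:15s} n={len(x):6d}  test MSE={mse:.5f}")
```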

Tim
  • Thank you. I am looking at a deterministic system (such as a controller output in an industrial system, where the parameter space is small but the true structure is complicated; it is still deterministic but cannot be modeled exactly, etc.). Would the VC dimension be a reasonable measure? How would this measure correspond to the number of random examples needed and to the number of network model parameters? Essentially, are you transferring deterministic but complicated modeling to simple modeling by training a neural network? – Justaperson Jun 14 '23 at 20:30
  • @Justaperson “the more the better” is the only answer that would not give you false promises. – Tim Jun 14 '23 at 20:40