
I have a dataset with 22 parameters and I applied PolynomialFeatures with degree 2 to examine the influence of interactions. The result was then fitted with an artificial neural network, with the lowest loss being around 0.02.

Now I want to find the position of the global maximum, which is equivalent to the minimum of the negated function. How do I find the position of this global optimum with PyTorch?


1 Answer


The description of the particular network is not specific enough to understand what the model is, or how it works. Additionally, the terminology seems confused because datasets don't have parameters, they have features.

However, the specifics of the network are not material to answering the question. In general, NNs don't have unique global optima because the loss function is non-convex (from the perspective of the network parameters). Some elaboration:

Finding a global optimum of a high-dimensional, non-convex function such as a neural network is typically intractable due to the very large size of the parameter space. Global optimization is hard in a small number of dimensions, and becomes much harder as the number of dimensions increases; for instance, a naive grid search with only 10 candidate values per dimension requires $10^d$ evaluations for a $d$-dimensional search space. Even if the search space is "small," global optimization is a challenging task. See:

Moreover, local optima are often very high quality for neural networks. See:

These facts are common to neural networks in general, and not specific to PyTorch or any other neural network software.
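That said, if the goal is to find an input vector at which the fitted network's output is as large as possible, a common practical compromise is gradient-based search over the input space from several random starting points, keeping the best result found. This yields a good local maximum rather than a guaranteed global one, which is consistent with the points above. Below is a minimal PyTorch sketch; the placeholder architecture, the 22-dimensional raw input, and the feature bounds are assumptions, not details taken from the question.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_features = 22                     # raw input dimension, taken from the question
model = nn.Sequential(              # placeholder network; substitute the trained model
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
model.eval()
for p in model.parameters():        # freeze the weights; we optimize the inputs instead
    p.requires_grad_(False)

lower, upper = -1.0, 1.0            # assumed feature bounds; use the range of the training data

best_x, best_val = None, -float("inf")
for restart in range(20):           # several random restarts to avoid poor local optima
    x = torch.empty(1, n_features).uniform_(lower, upper).requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=0.05)
    for step in range(500):
        optimizer.zero_grad()
        loss = -model(x).sum()      # maximize the output by minimizing its negative
        loss.backward()
        optimizer.step()
        with torch.no_grad():       # keep the candidate inside the assumed data range
            x.clamp_(lower, upper)
    value = model(x).item()
    if value > best_val:
        best_val, best_x = value, x.detach().clone()

print("best value found:", best_val)
print("at input:", best_x)
```

Note that if PolynomialFeatures is applied before the network, the degree-2 expansion would need to be reproduced with differentiable torch operations inside the objective, so that gradients flow back to the 22 raw features rather than treating the expanded features as free variables.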
