
I'm trying to create a regression MLP to predict a certain parameter from input data (which I've called indicators). Below I've put graphs corresponding to 570 entries of indicators 1 to 4 against parameters 1 and 2.

[Figure: graphs between indicators and parameters]

As seen, there is a strong dependence between the indicators and parameter 1, but much less so for parameter 2. I've created an MLP and evaluated it with repeated K-fold cross-validation (10 folds, 5 repeats) on the following model: input dim: 4, hidden layer dim: 4 (ReLU activation), output layer dim: 1, optimizer: Adam, loss: mean_absolute_percentage_error (chosen so the losses are easy to interpret as percentages). The mean final validation losses I get are:

  1. Parameter 1: Mean: 37.780, SD: 18.929 (mean is around 30% when using a dropout of 0.2)
  2. Parameter 2: Mean: 13.796, SD: 3.824 (mean is around 10% when using a dropout of 0.2)

I've repeated the exercise with different numbers of hidden layers (1, 2, 3), perceptrons per layer (1 to 10), dropout values (0.2 to 0.5), and learning rates (0.01, 0.001, 0.0001), but the validation error for parameter 2 is always at least 20-25% lower than for parameter 1. Is this caused by a big flaw in the MLP that I'm unable to figure out, or is it expected?
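For reference, here is a minimal sketch of the evaluation loop I describe above. I haven't said which framework I use; this version uses scikit-learn's `RepeatedKFold` and `MLPRegressor` with synthetic data standing in for my 570 entries, and computes the per-fold loss as a percentage to match the numbers quoted above:

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Synthetic stand-in for the real data: 570 entries, 4 indicators, 1 parameter
rng = np.random.default_rng(0)
X = rng.normal(size=(570, 4))
y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=570)

# 10 folds x 5 repeats = 50 validation scores
rkf = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = []
for train_idx, val_idx in rkf.split(X):
    model = MLPRegressor(hidden_layer_sizes=(4,), activation="relu",
                         solver="adam", max_iter=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[val_idx])
    # sklearn's MAPE is a fraction; multiply by 100 for a percentage
    scores.append(100 * mean_absolute_percentage_error(y[val_idx], pred))

print(f"Mean: {np.mean(scores):.3f}  SD: {np.std(scores):.3f}")
```

The model architecture and CV settings match what I describe (4 inputs, one hidden layer of 4 ReLU units, Adam, MAPE loss, 10 folds, 5 repeats); the dropout variant isn't shown since `MLPRegressor` has no dropout option.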
