Is there a general conclusion on what one should expect from hyperparameter tuning? For instance, is it always the case that hyperparameter tuning can only raise performance from OK to good (say, an r-score of 0.75 to 0.90), or is it possible to see a jump from bad to good (0.1 to 0.9) after tuning?

Let's say I'm working on a problem with a CNN. I started with a shallow fully connected architecture, which gives me an r-score of only 0.3, though there is clearly a linear relation between the targets and the predictions. I'm now at the point of deciding: should I invest time in tuning the hyperparameters, or is this already a flag that something else is wrong and I need to investigate more fundamental things, such as my dataset?
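(For context, here is a minimal sketch of the kind of sanity check I mean, with made-up numbers: fit a plain linear baseline and compute the R² score before tuning anything. The data and helper function are hypothetical, not from my actual pipeline.)

```python
# Hypothetical baseline check: ordinary least squares for y ~ a*x + b,
# then R^2, to see how far a trivial linear model gets before any tuning.

def linear_baseline_r2(x, y):
    """Fit a one-variable OLS line and return the R^2 score."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    a = sxy / sxx                      # slope
    b = mean_y - a * mean_x            # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Toy, nearly linear data (made up for illustration)
x = [0, 1, 2, 3, 4, 5]
y = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]
print(round(linear_baseline_r2(x, y), 3))
```

If a baseline like this already scores well above the CNN's 0.3, that suggests the problem lies with the model or the pipeline rather than with the data itself.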

I understand this might be dependent on the specific problem, but I'd like to hear your ideas.

xshang
  • I think this question can't be answered well because (as you note) it will vary strongly between cases. Closely related: https://stats.stackexchange.com/questions/222179/how-to-know-that-your-machine-learning-problem-is-hopeless – mkt Jun 24 '22 at 13:39
  • Since you already stated that there is clearly a linear relationship, why do you feel the need to use a CNN? Linear regression is often hard to beat in such cases and not worth the hassle. – yeahd Jun 24 '22 at 11:47
  • @yeahd There is a linear relationship between the target and prediction, not the features and target. – xshang Jun 24 '22 at 17:14

0 Answers