
I have a model trained on a regression task: predicting the severity of cancer on a scale from 0 to 5. My supervisor then asked me to validate it on other datasets. I found one, but it differs in two ways. First, the image size: the model was trained on 1536×1536 images, but the new dataset's images are 96×96. Second, the task: the model was trained for regression, but the new dataset is labeled for binary classification.

So, how can I tackle these two challenges? I searched online but found no related articles. Thank you.

1 Answer


I think the image size shouldn't be an issue if you can resize/upsample the images to the training resolution. If your first layer is a convolutional layer, the model may even accept the smaller input without resizing (though you might need to adjust the padding, and any fully connected layers downstream will still expect a fixed feature-map size).
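As a minimal sketch of the resizing option: since 1536 is exactly 16 × 96, the 96×96 images can be upsampled to the training resolution by an integer factor. This example uses simple nearest-neighbour repetition with NumPy; in practice you would likely use your framework's interpolating resize instead, and the array shapes here are illustrative.

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbour upsampling by an integer factor along both spatial axes."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 96x96 image upsampled by 16 matches the 1536x1536 training resolution.
small = np.random.rand(96, 96)
large = upsample_nearest(small, 16)
print(large.shape)  # (1536, 1536)
```

Nearest-neighbour keeps pixel values unchanged, which matters if your preprocessing assumes a particular intensity distribution; bilinear or bicubic interpolation would smooth the image instead.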

For the second point: if it is a binary classification task, you can use a portion of the data to determine the threshold at which the output of the regression model should be considered a positive-class prediction. Try a range of thresholds and choose the one that gives the best metric on your held-out portion of the data.
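The threshold sweep described above can be sketched as follows. This is a grid search over candidate thresholds using accuracy as the metric; the scores and labels here are hypothetical, and you would substitute your model's held-out regression outputs and whatever metric suits your problem (e.g. F1 for imbalanced classes).

```python
import numpy as np

def pick_threshold(scores, labels, thresholds):
    """Return the threshold whose binarised predictions maximise accuracy
    on held-out (scores, labels), together with that accuracy."""
    best_t, best_acc = None, -1.0
    for t in thresholds:
        preds = (scores >= t).astype(int)  # regression output -> binary prediction
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Hypothetical held-out regression outputs (severity 0-5) and binary labels.
scores = np.array([0.2, 1.1, 2.8, 3.5, 4.2, 0.7, 4.9, 2.1])
labels = np.array([0,   0,   1,   1,   1,   0,   1,   0  ])
t, acc = pick_threshold(scores, labels, np.linspace(0, 5, 51))
print(t, acc)
```

A plain grid search is usually preferable to a gradient-based optimiser here: accuracy as a function of the threshold is a step function, so its gradient is zero almost everywhere and a local optimiser will not move from its starting point.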

dmh
  • 266
  • Thank you. The model can run, but I am concerned about performance and accuracy. I tried finding the threshold automatically with scipy.optimize.minimize, but for some reason it is not working (the initial threshold does not change at all). For image size, I tried 1) feeding 96 directly, and 2) resizing to 1536 and feeding that. I observed no difference, and if that holds, I prefer 96, since resizing from 96 to 1536 would be highly inefficient. I have previous experiments showing that image size matters in the medical domain: I remember a model trained on 1536 performed worse when predicting on 1024 images. – Shark Deng Sep 25 '20 at 04:26