
I am now taking Andrew Ng's Machine Learning course on Coursera. This is my first exposure to Machine Learning. In the discussion of logistic regression, the course contents describe a decision boundary defined by parameters $\theta$, with an input $X$ classified as positive when $\theta^T X > 0$. The logistic regression model, however, uses the sigmoid function to produce a probabilistic prediction rather than a concrete one.

My question is: is there a fundamental limitation that prevents us from learning the decision surface directly and making hard predictions for test data, if we do not wish to make probabilistic predictions? I realize that this may require a different cost function, and possibly a different optimization algorithm.
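To make the connection between the two views concrete, here is a minimal sketch (my own illustration with toy data, not part of the course materials) showing that thresholding the sigmoid output at 0.5 is exactly the same rule as checking $\theta^T X > 0$; the probabilistic model already defines a hard decision boundary.

```python
# Sketch: the probabilistic logistic regression model implies a hard classifier,
# because sigmoid(theta^T x) >= 0.5 holds exactly when theta^T x >= 0.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two Gaussian blobs, labels in {0, 1}.
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
               rng.normal(+1.0, 1.0, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
Xb = np.hstack([np.ones((100, 1)), X])          # prepend intercept column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit theta by plain gradient descent on the logistic (cross-entropy) loss.
theta = np.zeros(Xb.shape[1])
for _ in range(5000):
    grad = Xb.T @ (sigmoid(Xb @ theta) - y) / len(y)
    theta -= 0.1 * grad

prob_pred = (sigmoid(Xb @ theta) >= 0.5).astype(int)   # threshold the probability
hard_pred = (Xb @ theta >= 0).astype(int)              # sign of theta^T x
print(np.array_equal(prob_pred, hard_pred))            # True: identical decision rule
```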

SPMP
  • No, this is what SVMs do. There are serious philosophical issues with this approach in many application domains, though; probabilistic prediction is generally the preferred method (see the hinge-loss sketch after these comments). – Matthew Drury Mar 11 '18 at 19:44
  • @MatthewDrury I can accept that if you write it up as an answer. – SPMP Mar 11 '18 at 19:59
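As the first comment notes, an SVM learns the separating surface directly. Below is a minimal hinge-loss sketch (my own illustration with toy data, not code from the thread) in which the model never produces probabilities and only ever outputs hard predictions $\operatorname{sign}(\theta^T X)$.

```python
# Sketch: learn the decision boundary directly by minimizing a regularized hinge
# loss (a linear SVM), with subgradient descent; predictions are hard labels only.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
               rng.normal(+1.0, 1.0, size=(50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])        # labels in {-1, +1}
Xb = np.hstack([np.ones((100, 1)), X])

theta = np.zeros(Xb.shape[1])
lam, lr = 0.01, 0.1
for _ in range(2000):
    margins = y * (Xb @ theta)
    # Subgradient of the regularized hinge loss max(0, 1 - y * theta^T x).
    mask = (margins < 1).astype(float)
    grad = -(Xb.T @ (mask * y)) / len(y) + lam * theta
    theta -= lr * grad

predictions = np.sign(Xb @ theta)                      # hard predictions only
print(np.mean(predictions == y))                       # training accuracy
```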

0 Answers