I am trying to understand theory from my Model Identification And Data Analysis course at University.
The example I am referring to is predicting the probability of a heart attack. Essentially, from my dataset Dn I take features such as age, cholesterol level, activity level, etc., and I feed them to the ideal function, which returns a conditional probability P(y=+1|x), where y=+1 means a heart attack is likely to occur.
What is not clear to me is why, at this stage, we then make a random prediction based on this probability and take the output y_pred of that draw as the result. My questions are:
- Wouldn’t it be sufficient to simply accept the conditional probability result as the output of our model?
- What exactly does it mean to generate a random prediction from a probability? I have looked up inverse transform sampling, but it's not very clear to me.
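To make the second question concrete, here is a minimal sketch of what I understand "sampling a prediction from a probability" to mean, using inverse transform sampling for a Bernoulli outcome (the function name and the +1/-1 label encoding are my own assumptions, not the course's notation):

```python
import random

def sample_prediction(p, rng=random):
    """Inverse transform sampling for a Bernoulli outcome:
    draw u ~ Uniform(0, 1) and return y = +1 if u < p, else -1.
    Here p stands for P(y = +1 | x), the conditional probability
    returned by the ideal predictor."""
    u = rng.random()
    return 1 if u < p else -1

# Over many draws, the fraction of +1 outcomes approaches p:
rng = random.Random(0)
p = 0.7
draws = [sample_prediction(p, rng) for _ in range(10000)]
frac_positive = draws.count(1) / len(draws)  # close to 0.7
```

Is this, roughly, what the random prediction step is doing?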
Thanks
