
I'm a beginner in statistics, so I'm not sure if this has been asked before. I've looked, but I couldn't find an answer.

So I'm trying to use Bayes' theorem to build a probability distribution describing the probability that an asset fails within X years. For example, I would ask "what is the probability that my asset fails within 30 years?" and I would want the PDF to give me back a number.

The reason I'd like to use a Beta distribution is that I understand it, there is a lot of information available about it, and by changing its parameters I can get any shape I wish (to set up the prior). But studying up on it, it seems that the input random variable X is usually itself a probability, taking values in [0, 1]. Of course, it is not too difficult to rescale X to a range of, say, [0, 100]: multiply X by a scaling factor and divide the output PDF by that same factor.
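The rescaling described above can be sketched directly. This is a minimal illustration using only the standard library, assuming a hypothetical scale of 100 years; the division by the scale is the change-of-variables correction for the density:

```python
import math

def beta_pdf(x, a, b):
    """Standard Beta(a, b) density on (0, 1)."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * x ** (a - 1) * (1 - x) ** (b - 1)

def scaled_beta_pdf(t, a, b, scale=100.0):
    """Density of T = scale * X: evaluate at t/scale, then divide by scale."""
    return beta_pdf(t / scale, a, b) / scale
```

For example, a symmetric Beta(2, 2) stretched to 100 years has density 1.5 at the midpoint on the unit scale, which becomes 0.015 at t = 50 on the year scale.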

My question is: "Is using the beta distribution for lifetime prediction an accepted practice?"

2 Answers


No, it is not. There are other distributions that "tell a story" that fits better with lifetime data. The simplest example is the exponential, which assumes a constant hazard. Other distributions like the Weibull or Gompertz tell a more complicated/richer story. However, the baseline distribution is often left unspecified altogether by using a Cox proportional hazards model.
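The "stories" these distributions tell show up most clearly in their hazard functions. Here is a sketch, with hypothetical parameter values, using only the standard library:

```python
import math

def exponential_hazard(t, rate):
    """Constant hazard: the risk of failure does not depend on age."""
    return rate

def weibull_hazard(t, shape, scale):
    """Hazard rises with age when shape > 1, falls when shape < 1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def gompertz_hazard(t, a, b):
    """Hazard grows exponentially with age (b > 0), as in adult mortality."""
    return a * math.exp(b * t)
```

An exponential asset is as likely to fail in year 30 as in year 1 (given it has survived); a Weibull with shape > 1 wears out, and a Gompertz ages ever faster.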

Maarten Buis

To see why it’s not recommended, think about a Beta scaled to (0, 100). This would make everything below 100 (but positive) a live possibility, but everything above 100 completely impossible (along with 100 itself, but we’ll leave that nuance aside). Does this make sense? Even if the probabilities become minuscule above 100, why would we want to declare it utterly impossible for failure to occur that late? What’s so special about 100 as a cutoff? One reason Exponential and Weibull distributions are preferred is that they support all positive values, avoiding an artificial cutoff beyond which survival is completely ruled out.
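The cutoff argument can be checked numerically. Taking a concrete Beta(2, 2) rescaled to (0, 100) as an example (its CDF on the unit interval is 3x² − 2x³), the survival probability beyond 100 years is exactly zero, while a Weibull (hypothetical parameters here) always leaves some mass, however tiny:

```python
import math

def scaled_beta22_survival(t, scale=100.0):
    """P(T > t) for Beta(2, 2) rescaled to (0, scale); zero past the cutoff."""
    u = min(max(t / scale, 0.0), 1.0)
    return 1.0 - (3 * u ** 2 - 2 * u ** 3)

def weibull_survival(t, shape, scale):
    """P(T > t) for a Weibull: strictly positive for every finite t."""
    return math.exp(-((t / scale) ** shape))
```

The scaled Beta declares failure at 150 years literally impossible, whereas the Weibull assigns it a minuscule but nonzero probability, which is usually the more honest statement.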