
For a random variable $X$ with pdf $f(x)$, the loss function* is defined as $$n(x) = \mathbb{E}[(X-x)^+] = \int_{x}^\infty (y-x)f(y)dy,$$ where $a^+ = \max\{a,0\}$. Or, for a discrete distribution, $$n(x) = \mathbb{E}[(X-x)^+] = \sum_{y=x}^\infty (y-x)f(y).$$ Loss functions are used frequently in inventory theory and other fields.

*This is different from the "loss function" used in machine learning.

For some well known probability distributions, there are explicit forms for the loss function, typically using the pdf/pmf and cdf. For example, if $X$ has a standard normal distribution, then $$n(x) = \phi(x) - x(1-\Phi(x)),$$ where $\phi(\cdot)$ and $\Phi(\cdot)$ are the standard normal pdf and cdf. And if $X$ has a Poisson($\lambda$) distribution, then $$n(x) = -(x-\lambda)(1-F(x)) + \lambda f(x).$$ These explicit forms are nice because they can be calculated without performing numerical integration or computing long sums, using pdf/pmf and cdf functions that are built into nearly every programming language and mathematical software package.
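These explicit forms are easy to check numerically. Below is a quick sketch using SciPy (the evaluation points, $\lambda=4$, and the truncation of the Poisson sum at 200 terms are arbitrary choices for illustration):

```python
# Sanity-check the explicit standard-normal and Poisson loss functions
# against direct computation of E[(X - x)^+].
import numpy as np
from scipy.stats import norm, poisson
from scipy.integrate import quad

def n_std_normal(x):
    """Explicit form: n(x) = phi(x) - x * (1 - Phi(x))."""
    return norm.pdf(x) - x * (1 - norm.cdf(x))

def n_poisson(x, lam):
    """Explicit form: n(x) = -(x - lam) * (1 - F(x)) + lam * f(x)."""
    return -(x - lam) * (1 - poisson.cdf(x, lam)) + lam * poisson.pmf(x, lam)

# Continuous case: numerical integration of the defining integral
x = 0.7
direct, _ = quad(lambda y: (y - x) * norm.pdf(y), x, np.inf)
print(abs(direct - n_std_normal(x)) < 1e-8)   # True

# Discrete case: direct summation (tail beyond y = 200 is negligible)
lam, x = 4.0, 6
direct = sum((y - x) * poisson.pmf(y, lam) for y in range(x, 200))
print(abs(direct - n_poisson(x, lam)) < 1e-10)  # True
```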

I have seen explicit forms for loss functions for a handful of distributions, but they are typically scattered in the appendices of inventory-theory textbooks (e.g., Zipkin 2000). I've never found them nicely collated anywhere.

Do you know of a resource to find explicit-form loss functions for more probability distributions?

Bonus points if the resource also has complementary loss functions ($\mathbb{E}[(X-x)^-]$) and second-order loss functions ($\frac12\mathbb{E}\left[\left([X-x]^+\right)^2\right]$)!

LarrySnyder610

2 Answers


There is indeed a paper titled Loss Distributions that provides the limited expected value function $L(x)=\Bbb E[\min\{X,x\}]$ for several probability distributions (on page 15). It is directly related to the first-order loss function $n(x)$ through $$n(x)=\Bbb E(X)-L(x),\tag1$$ and notice that the loss function can also be written as $$n(x)=\int_x^\infty yf(y)\,dy-x(1-F(x))\tag2$$ after splitting the term $(y-x)$. The expressions for $L(x)$ and $\Bbb E(X)$ are tabulated below.

\begin{array}{c|c|c}\small\sf{Distribution}&L(x)&\Bbb E(X)\\\hline
\small\sf{Log-Normal}&e^{\mu+\frac{\sigma^2}2}\Phi\left(\frac{\ln x-\mu-\sigma^2}{\sigma}\right)+x\left[1-\Phi\left(\frac{\ln x-\mu}{\sigma}\right)\right]&e^{\mu+\frac{\sigma^2}2}\\\hline
\small\sf{Exponential}&\frac1\lambda\left(1-e^{-\lambda x}\right)&\frac1\lambda\\\hline
\small\sf{Pareto}&\frac{\beta-\beta^\alpha(x+\beta)^{1-\alpha}}{\alpha-1}&\frac{\beta}{\alpha-1}\\\hline
\small\sf{Burr}&\small\frac{\lambda^{1/\tau}\Gamma\left(\alpha-\frac1\tau\right)\Gamma\left(1+\frac1\tau\right)}{\Gamma(\alpha)}{\rm B}\left(1+\frac1\tau,\alpha-\frac1\tau;\frac{x^\tau}{\lambda+x^\tau}\right)+x\left(\frac\lambda{\lambda+x^\tau}\right)^\alpha&\frac{\lambda^{1/\tau}\Gamma\left(\alpha-\frac1\tau\right)\Gamma\left(1+\frac1\tau\right)}{\Gamma(\alpha)}\\\hline
\small\sf{Weibull}&\frac{\Gamma\left(1+\frac1\tau\right)}{\beta^{1/\tau}}\Gamma\left(1+\frac1\tau,\beta x^\tau\right)+xe^{-\beta x^\tau}&\frac{\Gamma\left(1+\frac1\tau\right)}{\beta^{1/\tau}}\\\hline
\small\sf{Gamma}&\frac\alpha\beta F(x;\alpha+1,\beta)+x(1-F(x;\alpha,\beta))&\frac\alpha\beta\end{array}

Notice how in the majority of cases, $\Bbb E(X)$ is the same as the starting coefficient of $L(x)$. Of course, $n(x)$ can be found using $(1)$.
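Identity $(1)$ is straightforward to verify numerically. Here is a sketch for the exponential row of the table (the rate $\lambda=2$ and the evaluation point $x=0.4$ are arbitrary test values):

```python
# Check n(x) = E(X) - L(x) for the exponential distribution.
import numpy as np
from scipy.integrate import quad

lam, x = 2.0, 0.4

# Limited expected value from the table: L(x) = (1 - exp(-lam*x)) / lam
L = (1 - np.exp(-lam * x)) / lam
mean = 1 / lam

# First-order loss computed directly as E[(X - x)^+]
n_direct, _ = quad(lambda y: (y - x) * lam * np.exp(-lam * y), x, np.inf)

print(abs((mean - L) - n_direct) < 1e-10)  # True
```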

In particular, for extensive details on the first-order loss function (and its complementary function) for the normal distribution, I highly recommend Piecewise linear approximations of the standard normal first order loss function.

For a more general, heuristic approach that works for arbitrary distributions, there is a follow-up paper on Piecewise linearisation of the first order loss function for families of arbitrarily distributed random variables.


References

[1] Burnecki, K., Misiorek, A., Weron, R. (2010). Loss Distributions. MPRA Paper No. 22163. Available from: https://mpra.ub.uni-muenchen.de/22163/2/MPRA_paper_22163.pdf.

[2] Rossi, R., Tarim, S.A., Prestwich, S., Hnich, B. (2013). Piecewise linear approximations of the standard normal first order loss function. Available from: https://arxiv.org/pdf/1307.1708.pdf.

[3] Rossi, R., Hendrix, E.M.T. (2014). Piecewise linearisation of the first order loss function for families of arbitrarily distributed random variables. Proceedings of MAGO. pp. 1-4. Available from: https://gwr3n.github.io/chapters/Rossi_et_al_MAGO_2014_2.pdf.


Article: https://www.researchgate.net/publication/369926923_Loss_functions_for_inventory_control

Let $L_1(r)$, $L_c(r)$, and $L_2(r)$ be the first-order, complementary, and second-order loss functions, respectively. Also, let $F(x)$ denote the CDF of the distribution evaluated at $x$ (for a given set of parameters), and $f(x)$ the corresponding PDF (continuous case) or PMF (discrete case).

Note: As observed in @TheSimpliFire's answer, the first-order loss function can be expressed as $$n(x)=\Bbb E(X)-L(x).$$

In this post, $n(x)$ is given directly. For example, for the exponential distribution,

$$n(x)=\frac{1}{\lambda}-\frac{1}{\lambda}\left(1-e^{-\lambda x}\right)=\frac{e^{-\lambda x}}{\lambda},$$

which is the same as $L_1(r) = \frac{e^{-\beta r}}{\beta}$ below, with $\beta=\lambda$.

Normal distribution

$p = \frac{r-\mu}{\sigma}$.

Here, $F(.)$ and $f(.)$ are the CDF and PDF of the standard normal distribution respectively. $$L_1(r) = (\mu-r)[1-F(p)]+\sigma f(p)$$ $$L_c(r) = (r-\mu)F(p)+\sigma f(p)$$ $$L_2(r) = \frac{1}{2}[(r-\mu)^2+\sigma^2][1-F(p)]-\frac{1}{2}\sigma f(p)[r-\mu]$$
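A sketch of these three formulas in SciPy, with two consistency checks: $L_1(r)-L_c(r)$ should equal $\mu-r$ (since $\mathbb E[(X-r)^+]-\mathbb E[(r-X)^+]=\mathbb E[X]-r$), and $L_1$ should match direct numerical integration (the parameter values $\mu=10$, $\sigma=3$, $r=12$ are arbitrary):

```python
# Normal-distribution loss functions via scipy.stats.norm.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def normal_losses(r, mu, sigma):
    p = (r - mu) / sigma
    L1 = (mu - r) * (1 - norm.cdf(p)) + sigma * norm.pdf(p)
    Lc = (r - mu) * norm.cdf(p) + sigma * norm.pdf(p)
    L2 = 0.5 * ((r - mu)**2 + sigma**2) * (1 - norm.cdf(p)) \
         - 0.5 * sigma * norm.pdf(p) * (r - mu)
    return L1, Lc, L2

mu, sigma, r = 10.0, 3.0, 12.0
L1, Lc, L2 = normal_losses(r, mu, sigma)

# Identity: L1(r) - Lc(r) = mu - r
print(abs((L1 - Lc) - (mu - r)) < 1e-12)  # True

# Direct numerical integration of E[(X - r)^+]
L1_direct, _ = quad(lambda y: (y - r) * norm.pdf(y, mu, sigma), r, np.inf)
print(abs(L1 - L1_direct) < 1e-8)  # True
```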

Log-normal distribution

$p_1 = \frac{\ln(r)-\mu-2\sigma^2}{\sigma}$, $p_2 = \frac{\ln(r)-\mu-\sigma^2}{\sigma}$, $p_3 = \frac{\ln(r)-\mu}{\sigma}$

$$L_1(r) = e^{\mu+\frac{\sigma^2}{2}}[1-F(p_2)]-r[1-F(p_3)]$$ $$L_c(r) = rF(p_3) - e^{\mu+\frac{\sigma^2}{2}}F(p_2)$$ $$L_2(r) = \frac{r^2}{2}[1-F(p_3)]-re^{\mu+\frac{\sigma^2}{2}}[1-F(p_2)]+\frac{1}{2}e^{2\mu+2\sigma^2}[1-F(p_1)]$$
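A numerical spot-check of the log-normal forms (a sketch; $\mu=0.5$, $\sigma=0.8$, $r=2$ are arbitrary test values). Note that the coefficient of the last $L_2$ term is half the second moment, $\frac12 e^{2\mu+2\sigma^2}$, consistent with the $2\sigma^2$ shift inside $p_1$:

```python
# Check the log-normal L1 and L2 against direct numerical integration.
import numpy as np
from scipy.stats import norm, lognorm
from scipy.integrate import quad

mu, sigma, r = 0.5, 0.8, 2.0
p1 = (np.log(r) - mu - 2 * sigma**2) / sigma
p2 = (np.log(r) - mu - sigma**2) / sigma
p3 = (np.log(r) - mu) / sigma

L1 = np.exp(mu + sigma**2 / 2) * (1 - norm.cdf(p2)) - r * (1 - norm.cdf(p3))
L2 = (r**2 / 2) * (1 - norm.cdf(p3)) \
     - r * np.exp(mu + sigma**2 / 2) * (1 - norm.cdf(p2)) \
     + 0.5 * np.exp(2 * mu + 2 * sigma**2) * (1 - norm.cdf(p1))

# SciPy's lognorm uses shape s = sigma and scale = exp(mu)
pdf = lambda y: lognorm.pdf(y, s=sigma, scale=np.exp(mu))
L1_direct, _ = quad(lambda y: (y - r) * pdf(y), r, np.inf)
L2_direct, _ = quad(lambda y: 0.5 * (y - r)**2 * pdf(y), r, np.inf)

print(abs(L1 - L1_direct) < 1e-6, abs(L2 - L2_direct) < 1e-6)  # True True
```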

Exponential distribution

$$L_1(r) = \frac{e^{-\beta r}}{\beta}$$ $$L_c(r) = r-\left(\frac{1-e^{-\beta r}}{\beta}\right)$$ $$L_2(r) = \frac{e^{-\beta r}}{\beta^2}$$

Gamma distribution

$$L_1(r) = \frac{\alpha}{\beta}[1-F(r; \alpha+1, \beta)]-r[1-F(r; \alpha, \beta)]$$ $$L_c(r) = rF(r; \alpha, \beta)-\frac{\alpha}{\beta}F(r; \alpha+1, \beta)$$ $$L_2(r) = \frac{r^2}{2}[1-F(r; \alpha, \beta)]-\frac{r\alpha}{\beta}[1-F(r; \alpha+1, \beta)]+\frac{\alpha(\alpha+1)}{2\beta^2}[1-F(r;\alpha+2, \beta)]$$
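A sketch check of the gamma forms, where the key identity is $\mathbb E[X\,1_{X>r}]=\frac{\alpha}{\beta}[1-F(r;\alpha+1,\beta)]$ (the shape $\alpha=2.5$, rate $\beta=1.5$, and $r=2$ are arbitrary test values; note SciPy parameterizes gamma by shape and scale $=1/\beta$):

```python
# Check the gamma-distribution L1 and Lc against numerical integration.
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

alpha, beta, r = 2.5, 1.5, 2.0
F = lambda x, a, b: gamma.cdf(x, a, scale=1 / b)  # rate b -> scale 1/b

L1 = (alpha / beta) * (1 - F(r, alpha + 1, beta)) - r * (1 - F(r, alpha, beta))
Lc = r * F(r, alpha, beta) - (alpha / beta) * F(r, alpha + 1, beta)

pdf = lambda y: gamma.pdf(y, alpha, scale=1 / beta)
L1_direct, _ = quad(lambda y: (y - r) * pdf(y), r, np.inf)
Lc_direct, _ = quad(lambda y: (r - y) * pdf(y), 0, r)

print(abs(L1 - L1_direct) < 1e-8, abs(Lc - Lc_direct) < 1e-8)  # True True
```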

Negative binomial distribution

$$L_1(r) = \frac{np}{1-p}[1-F(r-2; n+1, p)]-r[1-F(r-1; n, p)]$$ $$L_c(r) = rF(r-1; n, p) - \frac{np}{1-p}F(r-2; n+1, p)$$ $$L_2(r) = \left(\frac{r^2+r}{2}\right)[1-F(r-1; n, p)]-\frac{rnp}{(1-p)}[1-F(r-2;n+1, p)]+\frac{(np)^2+np^2}{2(1-p)^2}[1-F(r-3; n+2, p)]$$

Geometric distribution

$$L_1(r) = \frac{(1-p)^r}{p}$$ $$L_c(r) = \frac{(1-p)^r+pr-1}{p}$$ $$L_2(r) = \frac{(1-p)^{r+1}}{p^2}$$
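These forms assume support $\{1,2,\dots\}$ (matching `scipy.stats.geom`), and the discrete $L_2$ here is the "factorial" second-order loss $\frac12\mathbb E[(X-r)^+(X-r-1)^+]$ rather than $\frac12\mathbb E[\left([X-r]^+\right)^2]$. A sketch check ($p=0.3$ and $r=4$ are arbitrary test values):

```python
# Check the geometric loss functions by direct summation.
from scipy.stats import geom

p, r = 0.3, 4
q = 1 - p

L1 = q**r / p
Lc = (q**r + p * r - 1) / p
L2 = q**(r + 1) / p**2

ks = range(1, 400)  # truncate the sum; the tail beyond k = 400 is negligible
L1_direct = sum(max(k - r, 0) * geom.pmf(k, p) for k in ks)
Lc_direct = sum(max(r - k, 0) * geom.pmf(k, p) for k in ks)
L2_direct = 0.5 * sum(max(k - r, 0) * max(k - r - 1, 0) * geom.pmf(k, p)
                      for k in ks)

print(abs(L1 - L1_direct) < 1e-10,
      abs(Lc - Lc_direct) < 1e-10,
      abs(L2 - L2_direct) < 1e-10)  # True True True
```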

Logarithmic distribution

$\beta = -\frac{1}{\ln(1-p)}$

$$L_1(r) = \frac{\beta p^r}{1-p}-r[1-F(r-1)]$$ $$L_c(r) = rF(r) - \beta\left[\frac{1-p^{r+1}}{1-p}-1\right]$$ $$L_2(r) = \frac{1}{2}[r^2+r][1-F(r-1)]+\frac{\beta (2r+1)p^r}{2(p-1)}-\frac{\beta p^r[p(r-1)-r]}{2(1-p)^2}$$

Poisson distribution

$$L_1(r) = -(r-\lambda)[1-F(r)]+\lambda f(r)$$ $$L_c(r) = (r-\lambda)F(r) + \lambda f(r)$$ $$L_2(r) = \frac{1}{2}\left([(r-\lambda)^2 + r][1-F(r)]-\lambda(r-\lambda)f(r)\right)$$
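A sketch check of the Poisson forms by direct summation ($\lambda=5$ and $r=7$ are arbitrary test values). As with the geometric case, the discrete $L_2$ printed here agrees with the factorial form $\frac12\sum_{y\ge r}(y-r)(y-r-1)f(y)$:

```python
# Check the Poisson loss functions by direct summation.
from scipy.stats import poisson

lam, r = 5.0, 7
F, f = poisson.cdf, poisson.pmf

L1 = -(r - lam) * (1 - F(r, lam)) + lam * f(r, lam)
L2 = 0.5 * (((r - lam)**2 + r) * (1 - F(r, lam)) - lam * (r - lam) * f(r, lam))

ys = range(r, 300)  # truncate the sum; the tail beyond y = 300 is negligible
L1_direct = sum((y - r) * f(y, lam) for y in ys)
L2_direct = 0.5 * sum((y - r) * (y - r - 1) * f(y, lam) for y in ys)

print(abs(L1 - L1_direct) < 1e-10, abs(L2 - L2_direct) < 1e-10)  # True True
```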

Steven01123581321