
I have a "real life" problem.

I am modelling a form of operational risk at a company. Each department of the company is asked how a particular event would impact their department, in terms of cost.

Each department provides a "probability that the event occurs" (say, $p$). They then give an opinion of the severity of the event (amount of loss) at different percentiles (the percentiles being $0, 0.5, 0.75, 0.9, 1$).

For this particular department, the frequency is modelled as $N \sim Bin(1, p)$. The severity, $X$, is modelled as a piecewise-linear distribution: the CDF increases linearly between the given loss amounts.

I.e. the severity can be modelled as a mixture of continuous uniform distributions:

$F(x) = 0.5\,U_{a,b}(x) + 0.25\,U_{b,c}(x) + 0.15\,U_{c,d}(x) + 0.1\,U_{d,e}(x)$

where $U_{a,b}$ is the CDF of a continuous uniform distribution on $[a, b]$, and the values $a, b, c, d, e$ are the severity amounts that the department has provided.

This then implies an aggregate (compound) distribution $S_{1}$ for this department, for which the mean and variance can be calculated.
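For reference, the mean-and-variance calculation I have in mind can be sketched in Python as follows (the function names and the example numbers are my own, purely for illustration). With $N \sim Bin(1,p)$ independent of $X$, the compound moments are $E[S] = p\,E[X]$ and $Var(S) = p\,Var(X) + p(1-p)\,E[X]^2$:

```python
def severity_moments(knots, weights=(0.5, 0.25, 0.15, 0.1)):
    """Mean and variance of the piecewise-uniform severity X.

    knots = (a, b, c, d, e); the segment [a, b] carries weight 0.5,
    [b, c] weight 0.25, [c, d] weight 0.15, [d, e] weight 0.1.
    """
    segments = list(zip(knots, knots[1:]))
    # E[U(lo,hi)] = (lo + hi)/2 ;  E[U(lo,hi)^2] = (lo^2 + lo*hi + hi^2)/3
    ex = sum(w * (lo + hi) / 2 for w, (lo, hi) in zip(weights, segments))
    ex2 = sum(w * (lo * lo + lo * hi + hi * hi) / 3
              for w, (lo, hi) in zip(weights, segments))
    return ex, ex2 - ex ** 2

def aggregate_moments(p, knots):
    """Mean and variance of S = N * X with N ~ Bin(1, p) independent of X."""
    ex, varx = severity_moments(knots)
    # Compound Bernoulli: E[S] = p E[X],  Var(S) = p Var(X) + p(1-p) E[X]^2
    return p * ex, p * varx + p * (1 - p) * ex ** 2

# Illustrative (made-up) inputs: a..e = 0, 100, 200, 300, 400 dollars, p = 0.1
mean_s, var_s = aggregate_moments(0.1, (0, 100, 200, 300, 400))
```

With these made-up inputs, $E[X] = 135$, so the aggregate mean is $p\,E[X] = 13.5$.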

$\underline{\text{First Question}}$: What is the CDF of the aggregate distribution $S_{1}$?

Let's say that another department offers their view of how this event will affect them. They give a new "probability that the event occurs" (say, $q$).

They will also provide severities at the given percentiles. They will then have their own aggregate distribution $S_{2}$.

The way our modelling software works, we can only take one "probability of the event occurring". We take the higher of $q$ and $p$. Let's say this is $q$.

$\underline{\text{Second Question}}$: Is there a way to "edit" the severity distribution from the first department so that, given their probability is now $q$, the new, implied aggregate distribution is "as close as possible" to $S_{1}$?

Perhaps we can find new values $a, b, c, d, e$ so that the 1st to 5th moments of the new aggregate distribution match those of $S_{1}$? I'm not sure how to do this - I can get as far as the mean and variance.
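In case it helps, here is how I think the moment conditions could be written down (a sketch only; the function names are my own). Since $N \in \{0, 1\}$, we have $S^k = N X^k$, so $E[S^k] = p\,E[X^k]$ for $k \ge 1$; matching the first five moments would then mean finding new knots whose severity $Y$ satisfies $E[Y^k] = (p/q)\,E[X^k]$:

```python
def severity_moment(knots, k, weights=(0.5, 0.25, 0.15, 0.1)):
    """k-th raw moment of the piecewise-uniform severity (knots assumed distinct)."""
    total = 0.0
    for w, lo, hi in zip(weights, knots, knots[1:]):
        # E[U(lo, hi)^k] = (hi^(k+1) - lo^(k+1)) / ((k+1) * (hi - lo))
        total += w * (hi ** (k + 1) - lo ** (k + 1)) / ((k + 1) * (hi - lo))
    return total

def target_severity_moments(p, q, knots, kmax=5):
    """Moments the new severity Y must have: since E[S^k] = p * E[X^k] when
    N ~ Bin(1, p), the aggregate with probability q matches the first kmax
    moments of S1 iff E[Y^k] = (p / q) * E[X^k] for k = 1..kmax."""
    return [(p / q) * severity_moment(knots, k) for k in range(1, kmax + 1)]
```

The five equations $E[Y^k] = (p/q)\,E[X^k]$, $k = 1, \dots, 5$, in the five unknown knots could then be handed to a numerical root-finder; in Excel I have only managed $k = 1, 2$.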

I appreciate the second question is a little vague - I can clarify any point. I have also done some preliminary calculations in Excel, which I can provide (how do I attach this?). Thanks very much for your thoughts.

Delvesy
  • I don't understand your survey instrument. For instance, how many "events" is each department asked to evaluate? Then, in asking for the probability of occurrence for an event, is that based on a random draw from an urn with known outcomes? Or is it more uncertain and, therefore, is more like a likelihood without a known, a priori probabilistic structure? You clearly want to be able to convert this "probability" into a frequency to combine it with "severity," creating a classic frequency-severity map as used in actuarial insurance. Please elaborate on how you are measuring "severity." – user78229 Mar 26 '17 at 11:18
  • Let's assume that each department is only required to evaluate one event. The probability that they provide is purely the likelihood that this event is to happen in a given year (and without an a priori structure) - and is therefore the "frequency" (per year) of the event. The severity is measured by asking for loss amounts at different percentiles, i.e. we are measuring their view of severity by asking for a piecewise CDF. Does this clarify things? – Delvesy Mar 26 '17 at 11:22
  • Yes, thank you. "Loss amounts" are in dollars? I think the problem becomes one of finding a loss ratio given a likelihood of occurrence and a distribution for severity. I guess I don't understand why you have only a single likelihood and quantiles for severity. Wouldn't the likelihood of occurrence evolve in proportion to the magnitude of the loss? This means that I don't see how a loss ratio is possible at the department level. You should be able to find it on a log-log scaled scatterplot by aggregating results over departments. – user78229 Mar 26 '17 at 11:42
  • We can assume that the "Loss amounts" are in dollars, yes. We assume that the severity and likelihood of occurrence are independent. We also assume that the event can only happen once in a year. I'm not sure how you are defining "Loss Ratio" or what you mean by "aggregating results over departments" - this gets into the issue of correlations between departments. I believe we are sliding off topic a little on how to model Operational risk rather than considering the mathematical challenge posed in the question. I appreciate your thoughts, all the same. – Delvesy Mar 26 '17 at 11:51
  • Perhaps. In actuarial insurance, a loss ratio is a well defined risk concept. Frankly, I view your simplifying assumption of independence as incorrect. I don't see how likelihood can be either independent from or constant across a range of loss amounts. Nor do I see how you can create a CDF that is in any way relatable to a single likelihood at the level of a unique department, without aggregating across departments. Nothing said here beyond an assertion and statement of "fact" suggests that I'm wrong. – user78229 Mar 26 '17 at 12:02
  • All of that said, why not revert to a Bayesian modeling framework, treating the likelihood as a prior and the severity quantiles as empirical evidence? The answer to your question would then be found in aggregating across MCMC sampling iterations comprising the posterior information. – user78229 Mar 26 '17 at 12:11
  • The severity distribution requested is that given the event has occurred. Let's consider an insurance example. Suppose the event is "WTC Type Disaster". This will have a probability of occurrence (very small). If it does occur then each insurance division will have a view of the potential losses to their division on the insurance contracts they write. The 100th percentile loss could be maxing out all line sizes, for example. Under the Collective Risk Model, the severity X and frequency N are assumed independent. – Delvesy Mar 26 '17 at 12:15
  • Chat won't be that helpful as I don't have an answer to your CDF question as formulated. – user78229 Mar 26 '17 at 12:18

0 Answers