L2.T7.14. Operational loss distributions (Cope)

Discussion in 'Today's Daily Questions' started by David Harper CFA FRM CIPM, Oct 10, 2011.

  1. David's ProTip: Our OpRisk study is generally about the LDA approach. Get a handle first on the 56 UOMs (8 business lines * 7 event types) and the essence of LDA, which is the convolution of frequency and severity distributions (a quick simulation sketch follows). Also, note: recall how VaR is not sub-additive? See below how this manifests in the "negative diversification" problem!
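
    To make the frequency-severity convolution concrete, here is a minimal Monte Carlo sketch of the LDA for a single UOM. The Poisson and lognormal parameter values are made up for illustration; this is not Cope's calibration:

```python
# Minimal LDA sketch for one unit of measure (UOM): the annual loss is a
# compound sum -- a Poisson-distributed number of losses, each drawn i.i.d.
# from a lognormal severity. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)   # seed chosen arbitrarily

lam = 25.0             # assumed frequency: average losses per year
mu, sigma = 10.0, 2.0  # assumed lognormal severity parameters

def simulate_annual_losses(n_years: int) -> np.ndarray:
    """Simulate n_years of aggregate annual losses for one UOM."""
    counts = rng.poisson(lam, size=n_years)  # frequency draws (N)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

annual = simulate_annual_losses(100_000)
# Basel AMA-style capital proxy: the 99.9th percentile of annual total loss
print(f"99.9% quantile of annual loss: {np.quantile(annual, 0.999):,.0f}")
```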

    AIMs: Discuss the nature of operational loss distributions. Discuss the consequences of working with heavy-tailed loss data. Determine the amount of data required to estimate percentiles of loss distributions. Describe methods of extrapolating beyond the data. Explain the loss distribution approach to modeling operational risk losses. Explain the challenges in validating capital models.

    Questions:

    14.1. Operational losses are typically characterized by heavy-tailed distributions. Each of the following is true about heavy-tailed operational loss data EXCEPT:
    a. Instability of estimates: A single, extreme loss can cause a dramatic change in the estimate of the distribution's mean and variance (see the sketch after the options)
    b. Exponentially bounded: the existence of an exponential bound implies the tail is not stable and, therefore, pooling may not be warranted
    c. Dominance of sums: A limited number (or even a single) extreme loss(es) can dominate the total loss value
    d. Dominance of mixtures: if two distributions from two units of measure (UOM) are pooled, the tail of the pooled distribution will tend to follow the distribution with the heavier tail
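
    A quick sketch of the instability in option (a), assuming an illustrative Pareto severity with tail index 1.5 (finite mean, infinite variance); the point survives other heavy-tailed choices:

```python
# Instability of estimates under heavy tails: compare the sample mean and
# variance before and after removing the single largest loss. The Pareto
# tail index (1.5) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)
alpha = 1.5                                  # finite mean, infinite variance
losses = 1 + rng.pareto(alpha, size=10_000)  # Pareto with minimum loss of 1

trimmed = np.sort(losses)[:-1]               # drop only the largest loss
print(f"mean:     all = {losses.mean():.2f},  without max = {trimmed.mean():.2f}")
print(f"variance: all = {losses.var():,.0f},  without max = {trimmed.var():,.0f}")
# Typically the variance estimate collapses once one extreme loss is removed,
# and with a heavier tail (alpha < 1) the mean becomes similarly unstable.
```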

    14.2. Basel's Advanced Measurement Approach (AMA) requires regulatory capital for operational risk to be estimated at the 99.9 percentile of the annual total loss distribution (i.e., 99.9% confident over one-year horizon). Based on this 99.9% standard, what do Cope et al. conclude about the internal data collected by a bank? (A sampling-error sketch follows the options.)
    a. It is easier to get accurate quantile estimates for heavy-tailed distributions (as compared to, say, an exponential distribution): therefore internal data is always sufficient by itself
    b. It is easier to get accurate quantile estimates for heavy-tailed distributions (as compared to, say, an exponential distribution): internal data is sufficient if the bank has enough history (years > X)
    c. It is more difficult to get accurate quantile estimates for heavy-tailed distributions: but internal data alone is sufficient if we use proper non-parametric techniques
    d. It is more difficult to get accurate quantile estimates for heavy-tailed distributions: internal data is insufficient; the bank must extrapolate beyond available data with parametric methods
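
    The sampling-error problem behind 14.2 can be seen directly: repeatedly draw a bank-sized internal sample (an assumed 1,000 losses) from a known lognormal and look at how the empirical 99.9th percentile bounces around:

```python
# How noisy is an empirical 99.9th percentile estimated from ~1,000 internal
# losses? Simulate many hypothetical "banks" and compare to the true value.
# The lognormal parameters and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
mu, sigma = 10.0, 2.0    # assumed lognormal severity
n_obs = 1_000            # assumed size of a bank's internal loss history

estimates = np.array([
    np.quantile(rng.lognormal(mu, sigma, size=n_obs), 0.999)
    for _ in range(2_000)
])

true_q = np.exp(mu + sigma * 3.0902)  # 3.0902 = standard normal 99.9% z-score
print(f"true 99.9% quantile:        {true_q:,.0f}")
print(f"estimates, 5th-95th pctile: {np.quantile(estimates, 0.05):,.0f} "
      f"to {np.quantile(estimates, 0.95):,.0f}")
# The spread is typically enormous relative to the true value, which is the
# sense in which internal data alone cannot pin down the 99.9% quantile.
```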

    14.3. According to Cope, the valid use of a parametric model to extrapolate beyond available operational loss data depends on the situation. Consider the three situations:

    I. A single mechanism is responsible for all observed losses, and this mechanism can be assumed to produce any future losses that exceed the levels currently seen in the data.
    II. Multiple mechanisms produce different loss events, some of which may be more likely to produce extreme events.
    III. The most extreme values are anomalous and/or do not follow a continuous pattern established by the decay in probabilities observed in the rest of the data.

    In which situation(s) above can we apply EVT to extrapolate "beyond the available data"? (A GPD fitting sketch follows the options.)

    a. We can always apply EVT (e.g., GPD) to extrapolate in any of these situations
    b. We may be able to apply EVT (e.g., GPD) if we are in the first Situation (I.)
    c. We may be able to apply EVT (e.g., GPD) if we are in the second or third Situation (II. or III.)
    d. We can never apply EVT (e.g., GPD) to extrapolate in any of these situations
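
    For intuition on Situation I, a peaks-over-threshold sketch of the EVT idea: fit a generalized Pareto distribution (GPD) to exceedances over a high threshold and read off quantiles beyond the observed data. The stand-in data, threshold choice, and parameters are all illustrative assumptions:

```python
# EVT extrapolation sketch: fit a GPD to losses above a high threshold u,
# then extrapolate to a quantile (99.99%) far beyond the observed data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
losses = rng.lognormal(10.0, 2.0, size=5_000)  # stand-in loss data

u = np.quantile(losses, 0.95)                  # assumed high threshold
exceedances = losses[losses > u] - u
c, _, scale = genpareto.fit(exceedances, floc=0)  # fix GPD location at 0

# P(X > u) is estimated empirically; chain it with the fitted GPD tail:
#   P(X > x) = P(X > u) * P(exceedance > x - u)
p_u = (losses > u).mean()
target_p = 0.0001                              # i.e., the 99.99% quantile
q = u + genpareto.ppf(1 - target_p / p_u, c, loc=0, scale=scale)
print(f"threshold u = {u:,.0f}; extrapolated 99.99% quantile = {q:,.0f}")
```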

    14.4. Which best summarizes the loss distribution approach (LDA) to modeling operational risk losses?
    a. A single total loss distribution is specified, but which has at least two parameters so that frequency and severity can both be modeled
    b. Annual losses for a unit of measure (UOM) compound a parametric frequency distribution of (N) losses with an empirical severity distribution (L); dependence is implicitly handled in the empirical distribution
    c. Annual losses for a unit of measure (UOM) are the sum of a random number (N) of losses, where each loss (L) is i.i.d. according to a severity distribution. The frequency (N) and severity (L) distributions within each UOM are independent. Dependence across units, as it impacts aggregation to the "top of the house," is modeled.
    d. Annual losses for a unit of measure (UOM) are the sum of a random number (N) of losses, where each loss (L) is i.i.d. according to a severity distribution. Independence is assumed both within and across the UOMs.

    14.5. According to Cope, why or when can there be "negative diversification benefits" among operational losses in a bank's units of measure (UOM)? (A simulation sketch follows the options.)
    a. Never, at worst the total risk equals the sum of risks across individual units (UOM)
    b. Only when the correlations between units (UOMs) are negative
    c. Even under independence of losses between units (UOMs), because quantiles (VaR) are not sub-additive, and hence not coherent, when distributions are heavy-tailed
    d. Even under independence of losses between units (UOMs), but only if the distributions have infinite mean
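
    A simulation sketch of how quantiles (VaR) can fail sub-additivity for independent, heavy-tailed units. The Pareto tail index (0.8) is an illustrative assumption chosen so the effect shows clearly in a short run:

```python
# Negative diversification sketch: X and Y are INDEPENDENT heavy-tailed
# annual losses, yet VaR(X + Y) can exceed VaR(X) + VaR(Y) because
# quantiles are not sub-additive. The tail index 0.8 is an assumed value.
import numpy as np

rng = np.random.default_rng(99)
alpha, n = 0.8, 2_000_000

x = 1 + rng.pareto(alpha, size=n)   # UOM 1 annual losses
y = 1 + rng.pareto(alpha, size=n)   # UOM 2, independent of UOM 1

var_x, var_y = np.quantile(x, 0.999), np.quantile(y, 0.999)
var_total = np.quantile(x + y, 0.999)

print(f"VaR(X) + VaR(Y) = {var_x + var_y:,.0f}")
print(f"VaR(X + Y)      = {var_total:,.0f}")
# With samples this size you should typically see VaR(X+Y) > VaR(X)+VaR(Y):
# the "diversified" total requires MORE capital than the stand-alone sum.
```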

    14.6. Which best summarizes Cope's recommendation with respect to the VALIDATION of operational risk models?
    a. Similar to market risk models, backtesting is the most reliable validation approach
    b. As backtesting is not realistic, goodness-of-fit statistical tests should be favored
    c. As classic techniques with respect to loss tails are limited, external validators should only perform repeatable studies of loss distribution shapes in the range of the capital estimates
    d. As classic techniques with respect to loss tails are limited, external validators should prioritize the robustness of the model development process

    Answers:
