Cope chapter

Discussion in 'P2.T7. Operational Risk & ERM (25%)' started by shanlane, May 9, 2012.

  1. shanlane (Active Member)


    One of the aims tells us we need to "determine the amount of data required to estimate percentiles of loss distributions"

    The one example of this in the notes (and in the source reading) is extremely confusing. Is there any way to sum up what they are looking for in a clean formula or algorithm?

    The chapter just throws numbers around and it's impossible to tell where they are coming from. Is there any way you could show how they came up with (for instance) 277,500 for the Pareto or 8,400 for the exponential?

    Also, from the notes, do the 100,000 samples come from the fact that there are 100 losses per year, and to find the 99.9% we need to somehow be 100 times more accurate, which is why we need the 99.999%?


  2. David Harper CFA FRM (Staff Member)

    Hi Shannon,

    I think the key (summary) formula is his (2), which corresponds to Jorion's 5.17 (Jorion's language is plainer): standard error (quantile estimator) = 1/f(q_p) * SQRT[p*(1-p)/n], where f(q_p) is the value of the density (pdf) at the p-quantile.

    The broad point, of course, is that the standard error of the sample quantile is (frustratingly) large in relative terms and that it's quite difficult to reduce the standard error. There are three variables:
    • the PDF, f(.), which means that quantile estimation accuracy does depend on the distribution; i.e., Cope's numbers are merely illustrative of a lognormal
    • the probability (p), which is our thematic irony: the higher the probability (where we are interested), the wider the standard error (less precise our estimate)
    • the number of trials (sample size, n). Precision scales NOT with n, but with SQRT(n). Notice in Table 1: going from 1,000 to 10,000 data points does not improve precision 10x; it improves it only by ~SQRT(10), from 20 to 7.7. Frankly, this is probably the only key point from a testability perspective; i.e., to increase the precision of our quantile estimator by a factor of X, we need a sample that is X^2 larger. To increase our precision by 10x, we need at least 100x the sample size. (The particulars of Cope's numbers relate to his definition of relative error; I don't have the time/inclination to go through the math. It's not central, just choices he makes to illustrate the properties of the standard error.)
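    The SQRT(n) scaling can be sketched in a few lines of Python (my own illustration, not from Cope; the density value 0.0267 for the standard normal at its 99% quantile is an approximation):

    ```python
    import math

    def quantile_se(pdf_at_q, p, n):
        """Standard error of the sample p-quantile (Cope's eq. 2 / Jorion 5.17):
        SE = (1 / f(q_p)) * SQRT[p * (1 - p) / n]."""
        return math.sqrt(p * (1 - p) / n) / pdf_at_q

    # sqrt(n) scaling at p = 99% for the standard normal, f(2.33) ~= 0.0267:
    se_1k = quantile_se(0.0267, 0.99, 1_000)
    se_10k = quantile_se(0.0267, 0.99, 10_000)
    print(se_1k / se_10k)  # ratio is SQRT(10) ~= 3.16, not 10
    ```

    The same function shows that halving the standard error always requires quadrupling n, whatever the distribution.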
    I hope that helps. Thanks,
  3. shanlane (Active Member)

    It is certainly helpful.

    I have no doubt about your conclusions, but if our p was 90%, the p*(1-p) would be 0.09. If p was .99, then this product would be .0099. An increase in p seems to decrease this amount.

    Also, Dowd uses this for a VaR confidence interval but uses the area under the pdf, not the "height" of the pdf at that point. Is this inconsistent?

    Finally, if f() is the "height", it would seem that a fat-tailed distribution would have a greater value, and since it is in the denominator, a greater number would reduce the SE.

    This is terribly confusing.


  4. David Harper CFA FRM (Staff Member)

    Hi Shannon,

    But (p) informs the pdf denominator also. For example, let f(.) be the standard normal pdf (approximations):
    • p = 95% --> f(1.645) ~= 0.103, such that SE(@ 95%) = SQRT[p*(1-p)]/f(1.645) = 0.218/0.103 = 2.11; compare to:
    • p = 99% --> f(2.33) ~= 0.027, such that SE(@ 99%) = SQRT[p*(1-p)]/f(2.33) = 0.0995/0.0267 = 3.73; compare to:
    • p = 99.9% --> f(3.09) ~= 0.0034, such that SE(@ 99.9%) = SQRT[p*(1-p)]/f(3.09) = 0.0316/0.0034 ~= 9.4
    • the decreasing pdf f(.) more than offsets the increasing numerator SQRT[p*(1-p)]
    • but also, to my original point about Cope's illustration, "accuracy" can be variously defined: Cope defines it not just as the absolute value of the standard error but as a percentage of the true quantile value.
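    Those three values can be reproduced with only the Python standard library (a sketch; `NormalDist` gives the exact normal quantiles and densities, so the outputs differ slightly from the hand-rounded figures above):

    ```python
    from math import sqrt
    from statistics import NormalDist

    N = NormalDist()  # standard normal

    def relative_se_factor(p):
        """The n-independent part of the quantile-estimator standard error:
        SQRT[p*(1-p)] / f(q_p), evaluated for the standard normal."""
        q = N.inv_cdf(p)          # the p-quantile
        return sqrt(p * (1 - p)) / N.pdf(q)

    for p in (0.95, 0.99, 0.999):
        print(p, round(relative_se_factor(p), 2))  # roughly 2.11, 3.73, 9.4
    ```

    Dividing each factor by SQRT(n) gives the standard error for a given sample size.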
    Re: Dowd: no, it's not inconsistent. Dowd is using bins to infer the pdf, which is a more general approach, as he understands: in practice we don't have a function, we have an empirical dataset. He may use the normal (or another known function) to communicate with the familiar, but his bin approach should approximate/converge and is, I think, ultimately more robust in actual, empirical applications. (If you just have a histogram, you need to define bins to retrieve a pdf; I am not aware of a way to do it without bins.)

    Re: "Finally, if f() is the "height" it would seem that a fat tailed distribution would have a greater value and since it is in the denominator a greater number would reduce the SE."
    Yes, TRUE. I don't think anybody said that a heavy-tailed distribution --> higher SE; rather, just that the quantile estimator varies with the distribution. That is different from: GIVEN a distribution, higher p --> higher SE.

  5. shanlane (Active Member)

    Thank you for the very informative answer. I will try to wrap my head around the whole Dowd thing.

    Regarding the "fat tailed" comment: it was not said explicitly, but it was certainly implied when Cope said that, to get the SE down to 10% (or whatever it was), it took 250,000 or 300,000 samples for a Pareto distribution but fewer than 10,000 for (I think) an exponential distribution.

    I thought that was one of the main points: if the tail is fat, we need more samples to get the SE down to a reasonable level. The way I read it, if we had the same number of samples from a light-tailed and a heavy-tailed distribution, the fat-tailed one would have the higher SE.

    Thanks again!

    One more week :eek:!!

  6. David Harper CFA FRM (Staff Member)

    Hi Shannon,

    You are right, I surely did not represent Cope, for he writes "It is significantly more difficult to get accurate quantile estimates for heavy-tailed distributions", using the exponential as a light-tailed counterexample.

    Frankly, I am not following the intuition of that assertion, either :( I wondered if "relative error" explained it, but that seems to go in the other direction: a heavier tail implies a higher quantile, ceteris paribus, and relative error [i.e., SE/quantile] as a ratio would DECREASE for a given standard error (i.e., the higher quantile lowers the error as a ratio).

    However, I do notice, if I use the Student's t as a barely heavy-tailed distribution:
    • the pdf at its 99% quantile is less than the normal pdf @ 99%:
    • e.g., at 30 df, the 99% Student's t pdf = T.DIST(T.INV(99%, 30), 30, FALSE = pdf) = 0.02306,
      versus the normal pdf (@ 99%) = 0.0267; i.e., the t's 99% quantile is "further out to the right" than the normal's 99%, such that its pdf "height" is lower!
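    The same comparison can be done without Excel, using only the Python standard library (a sketch: the t quantile 2.457 is taken as given from the Excel T.INV(99%, 30) calc above, and the t density is computed from its closed form via the gamma function):

    ```python
    import math
    from statistics import NormalDist

    def t_pdf(x, df):
        """Student's t density, computed from its closed form (stdlib only)."""
        c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
        return c * (1 + x * x / df) ** (-(df + 1) / 2)

    q_norm = NormalDist().inv_cdf(0.99)  # ~2.326
    q_t30 = 2.457                        # T.INV(99%, 30), taken as given

    print(NormalDist().pdf(q_norm))  # ~0.0267
    print(t_pdf(q_t30, 30))          # ~0.0231: lower "height" at a further-out quantile
    ```

    Plugging the lower t density into the denominator of the SE formula gives a larger standard error at the same p, consistent with the bullets above.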
    But honestly this does not get me to an intuitive understanding of the statement, thanks,
  7. shanlane (Active Member)

    Intuitively I think it makes sense: if there were fat tails, there would be more of a chance of some HUGE outliers, and this would increase the SE. But the mathematics of it (from that formula, at least) does not seem to work. Does that make sense?

  8. David Harper CFA FRM (Staff Member)

    Yes, that resonates as intuitive: for a heavier tail, the sampling variation seems like it would be larger (the retrieved quantile would seem to have more sampling "wiggle room" in the heavy tail). I agree with that ... and I also cannot connect it to the formula :eek: , thanks!
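    That intuition can be checked with a quick Monte Carlo sketch (my own illustration, not from Cope): repeatedly estimate the 99% quantile from n draws and compare the sampling variation under a normal versus a heavy-tailed Student's t with 3 df.

    ```python
    import math
    import random
    import statistics

    random.seed(42)

    def t3_draw():
        """One Student's t(3) draw: Z / sqrt(V/3), V ~ chi-square(3) = Gamma(1.5, scale 2)."""
        z = random.gauss(0.0, 1.0)
        v = random.gammavariate(1.5, 2.0)
        return z / math.sqrt(v / 3.0)

    def sample_q99(draw, n=1000):
        """Order-statistic estimate of the 99% quantile from n draws."""
        xs = sorted(draw() for _ in range(n))
        return xs[int(0.99 * n) - 1]

    trials = 200
    sd_normal = statistics.stdev(sample_q99(lambda: random.gauss(0.0, 1.0)) for _ in range(trials))
    sd_t3 = statistics.stdev(sample_q99(t3_draw) for _ in range(trials))
    print(sd_normal, sd_t3)  # the heavy tail gives a noticeably noisier quantile estimate
    ```

    The heavy-tailed estimator's standard deviation comes out several times larger at the same n, which is the "more samples needed" point from the thread in simulated form.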
  9. shanlane (Active Member)

    Glad I could help out!


  10. Ashwin FRM (New Member)

    covered in Cope
