What's the effect of increasing the number of tail slices above VaR on ES?

David Harper CFA FRM

Subscriber
The only definitive statement I can make is that, as the number of slices increases, the estimate converges on the true ES.
Clearly, it can and often does increase toward the true ES: this is Dowd's statement (copied below), but in relation to estimating ES for a normal distribution.
And, intuitively, increasing the slices makes sense to me: the VaRs are not equidistant, so more slices add more extreme tail values, pulling up (increasing) the average.
However, that does not lead me to conclude that an increase is necessarily true; e.g., maybe it's not always increasing for light-tailed distributions.
Dowd: "Of course, in using this method for practical purposes, we would want a value of n large enough to give accurate results. To give some idea of what this might be, Table 3.2 reports some alternative ES estimates obtained using this procedure with varying values of n. These results show that the estimated ES rises with n, and gradually converges to the true value of 2.063. These results also show that our ES estimation procedure seems to be reasonably accurate even for quite small values of n. Any decent computer should therefore be able to produce accurate ES estimates quickly in real time."
 

cdbsmith

Member
David,

Quick question for a little more clarity on this:

Dowd's comments on ES converging toward a true value of 2.063 assume a standard normal distribution (mean of zero, standard deviation of one), which makes complete sense for a CDF where the VaRs can be determined for confidence levels of, for example, 95.1% or 95.6%. But what about ES based on discrete values? That is, if the tail is cut into more slices (say, from 10 up to 20 slices) that do not exceed the maximum loss, but instead fall somewhere between the 95% VaR and the 100% VaR, wouldn't that decrease the ES in absolute terms? Or is my understanding off base?

Would appreciate your comments.

Thanks,

Charles
 

David Harper CFA FRM

Subscriber
Hi Charles,

Right, I haven't reached a definitive conclusion either; I just still can't imagine a realistic scenario where increasing the number of slices decreases the ES approximation.

First, let's set aside the question of why this method would even be necessary for a discrete distribution. I raise this only because it could be a symptom of misunderstanding ES: the exact (true) 95% ES is a probability-weighted average of losses in the 5.0% tail, and I can't currently imagine a discrete distribution where this calculation is not possible. Which raises the question: why would we want/need to approximate? This method of slicing the tail, as shown by Dowd, is used to approximate ES when the exact ES calculation is non-trivial or otherwise inconvenient. (To me, it's much like numerical integration, http://en.wikipedia.org/wiki/Numerical_integration; in fact, I'm not immediately clear on how it's different!)

But to your point, why can't I see a decrease? Let's say we use Dowd's 95.0% ES and approximate with 10 slices, which requires us to average 9 (= 10 - 1) VaR quantiles: 95.5%, 96.0%, ..., 99.5%.

If we then increase to 20 slices, we will average 19 (= 20 - 1) VaR quantiles: 95.25%, 95.50%, ..., 99.50%, 99.75%.
  • Under this (basic) approach, the slices are equally sized: we add a 95.25% and also a "corresponding" 99.75%, for example.
  • All that is required is that the "new" 99.75% quantile lies further to the right of our current ES approximation than the "new" 95.25% lies to the left (an asymmetry, if you will). That would "pull up" the average. In a uniform distribution, I think increasing the slices would have no effect; but for any decreasing density, here is what I think happens: when we increase the slices, we add them equally by probability, but as long as the new higher-confidence quantile values move further out than the new lower-confidence values (the "asymmetry"), the average must increase. (That is, it seems like almost any distribution would exhibit this asymmetry by virtue of a pdf that decreases in the tail.) I would love to simulate this against some distributions; I am just expressing an intuition here. Thanks!
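For what it's worth, that intuition is easy to check numerically. Here is a minimal sketch (my own code, not Dowd's; function names are mine) of the slicing approximation for a standard normal, compared against the exact 95% ES, which works out to about 2.063:

```python
from statistics import NormalDist

def es_approx(alpha=0.95, n=10):
    """Approximate ES by averaging n - 1 equally spaced tail VaRs (the slicing method)."""
    nd = NormalDist()  # standard normal: mean 0, standard deviation 1
    step = (1 - alpha) / n
    quantiles = [nd.inv_cdf(alpha + k * step) for k in range(1, n)]
    return sum(quantiles) / len(quantiles)

# Exact 95% ES for a standard normal: pdf(z_alpha) / (1 - alpha) ~= 2.063
exact = NormalDist().pdf(NormalDist().inv_cdf(0.95)) / 0.05
```

Running this, es_approx rises monotonically with n (about 2.025 at n = 10) and converges toward 2.063 from below, consistent with Dowd's Table 3.2.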
 

cdbsmith

Member
Thanks for the great response, David!

I think I understand now.

So, for exam purposes, should I expect/assume GARP to focus on ES and VaR questions using normal distributions?

Thanks,

Charles
 

David Harper CFA FRM

Subscriber
Sure thing Charles :) For exam purposes:
  • ES is hard to test with continuous distributions (as you can see). GARP's overwhelming favorite here is the ES of a simple historical simulation: the 9x% ES is the simple average of the worst (1 - 9x%) losses. You still want to "own" the concept of a conditional (tail) average.
  • However, I do think the approximation method (for a normal) of ES is testable; I am pretty sure there is one (or two?) in the historical sample somewhere. But since the quantiles will need to be provided, there is nothing necessarily "normal" about this type of question: you'd have to be given the data in such a way that it is really a test of the concept, not of the distribution, anyway.
  • Re: VaR: yes, GARP loves normal-based 95% and 99% VaR (i.e., 1.645 and 2.33). That's FRM bread and butter.
So, you want to be comfortable with these two normal quantiles (95% and 99%) and normal-based VaR, but for ES you might only need to emphasize the concept and discrete applications. It's hard to say: ES seems to be getting some traction, so they could get creative. I hope that helps.
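To make the historical-simulation point concrete, here is a small sketch (the P&L data are entirely made up for illustration): the 95% ES is just the simple average of the worst 5% of outcomes.

```python
def historical_es(pnl, alpha=0.95):
    """ES from historical simulation: average of the worst (1 - alpha) fraction of losses."""
    losses = sorted((-x for x in pnl), reverse=True)  # convert P&L to losses, worst first
    k = max(1, round(len(losses) * (1 - alpha)))      # number of tail observations
    return sum(losses[:k]) / k

# 100 hypothetical daily P&L outcomes: the five worst losses are 50, 40, 35, 30, and 25,
# so the 95% ES is their simple average, (50 + 40 + 35 + 30 + 25) / 5 = 36.0
pnl = [-50.0, -40.0, -35.0, -30.0, -25.0] + [i * 0.1 for i in range(95)]
print(historical_es(pnl))  # prints 36.0
```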
 