Characteristics desirable for VaR estimates

I'm having trouble understanding much of the content (bolded) for this AIM. Can someone explain the following?

"The first desirable attribute is unbiasedness. Specifically, we require that the VaR estimate be the x% tail. Put differently, we require that the average of the indicator variable I(t) should be x%. This attribute alone is an insufficient benchmark. To see this, consider the case of a VaR estimate which is constant through time, but is also highly precise unconditionally (i.e., achieves an average VaR probability which is close to x%). To the extent that tail probability is cyclical, the occurrences of violations of the VaR estimate will be "bunched up". This is a very undesirable property, since we require dynamic updating which is sensitive to market ..."

How is a VaR constant through time?
- Does this just mean that the VaR estimate was the same over successive days, i.e., yesterday my 10-day VaR was X, today my 10-day VaR is X, and each day for the past week my 10-day VaR was X?

What is a VaR probability, a tail probability, and a cyclical tail probability? (Googling "tail probability" didn't help.)

"the second attribute which we require of a VaR estimate is that extreme events do not “bunch up”. Put differently, a VaR estimate should increase as the tail of the distribution rises. If a large return is observed today, the VaR should rise to make the probability of another tail event exactly x% tomorrow."

Am I correct in my understanding that by "extreme events" the author means losses in excess of VaR?

Thanks!!

David Harper CFA FRM

Staff member
Hi @afterworkguinness

These are thoughtful questions, I think! (sorry for delay, I am back from vacation).
  • Yes I do agree that a "VaR estimate which is constant through time" could include "[a] VaR estimate was the same over multiple time series ie: yesterday my 10 day VaR was X, today my 10 day VaR is X, each day for the past week my 10 day VaR was x." Although I can imagine the realization of a constant VaR estimate in multiple ways. To me, the most obvious is simply a VaR estimate which is not updated daily, and therefore is constant in-between updates.
    • For example, say the VaR estimate is (for some reason) updated quarterly, with T = 20*3 = 60 trading days. A 95% VaR expects 3 exceedences over the quarter. But, instead of 1 per month, they "bunch:" 0 in first month, 0 in second month, and 3 in the third month.
    • In this illustrated scenario, the VaR estimate (i.e., the 95% loss level) was constant and was precisely matched unconditionally by avg[I(t)] = 5.0%. However, if we want to scrutinize the "monthly cyclicality," it overestimated the "true" VaR in the first two months (it expected one loss per month but experienced zero) and underestimated the true VaR in the third month (it expected one loss but experienced three). If the tail risk is truly time-varying, this can be interpreted as unconditional precision (a total of 3 exceptions = 5.0% is a perfect match) but conditional imprecision (each month was wrong).
    • However, infrequent updating is not the only way to get a "constant VaR estimate:" another is simply to employ a naive approach. For example, to compute a historical standard deviation once and never update it (what Linda Allen calls the unconditional STDEV).
  • I agree that "VaR probability" is an interesting phrase, but I'm pretty confident they are simply rephrasing the essential statistical idea here: avg[I(t)] = x%; i.e., "we require that the average of the indicator variable I(t) should be x%."
    • I would interpret "VaR probability" here as "the realized (as opposed to expected) losses, as a percentage, in excess of the VaR level."
    • For example, in the case of a 95% confident VaR--which this paper (Boudoukh) happens to refer to as a 5% VaR because they refer to the 5% significance instead of the corresponding 95% confidence--we expect avg[I(t)] to be 5.0%, per the unbiasedness property itself, for a given VaR(t). But over an actual time period (e.g., a month), perhaps the VaR loss level, VaR(t), is exceeded on 6.0% of the days.
    • Consider then three different measures:
      • The 5% in "5% VaR" refers to an expected loss level which we expect (hope) corresponds to the 5% loss tail. This is the value we selected.
      • The 6% is the realized or actual percentage of losses in excess of the 5% VaR level. This is, in my opinion, what the authors mean by "VaR probability," as this is their I(t) variable.
      • Note there can still be (will still be) several I(t) values, one for each measured window, which can be overlapping. In which case, there is also an avg[I(t)].
  • Re: "Am I correct in my understanding that by "extreme events" the author means losses in excess of VaR?" Yes, in this paper. Your observation, IMO, is justified, as normally "extreme" connotes the extreme tail beyond the typical VaR (e.g., 99.99%). But this paper appears to make no such distinction and appears to use "extreme" to refer simply to the VaR tail. Please note they write on page 2: " ... extreme percentiles of the distribution (e.g., the 1% or 5% VaR) are notoriously hard to estimate with little data"
I hope that helps, I appreciate your specific observations, it helps me too to think more about the concepts in this paper :)
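If it helps to make the quarterly example concrete, here is a minimal sketch (my own illustration, not from the paper) of the unconditional-vs-conditional distinction: a constant 95% VaR over a 60-day quarter can show avg[I(t)] of exactly 5.0%, even though all three exceptions bunch in the third month. The specific exception days are arbitrary, chosen only to land in month three.

```python
# Illustration (mine, not Boudoukh's): a constant 95% VaR over a 60-day
# quarter with exactly 3 exceptions, all "bunched" in the third month.
T = 60
months = {1: range(0, 20), 2: range(20, 40), 3: range(40, 60)}

# I(t) = 1 if the day's loss exceeded the (constant) VaR level, else 0.
I = [0] * T
for t in (45, 50, 55):   # three exceptions, all in month 3 (arbitrary days)
    I[t] = 1

avg_I = sum(I) / T
print(f"unconditional avg[I(t)] = {avg_I:.1%}")   # 5.0% -- looks unbiased

for m, days in months.items():
    rate = sum(I[t] for t in days) / len(days)
    print(f"month {m}: exception rate = {rate:.1%} (expected 5.0%)")
```

Running this prints an unconditional rate of 5.0% but monthly rates of 0.0%, 0.0%, and 15.0%: precisely the "unbiased overall, wrong every month" pattern the paper flags as undesirable.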
This may only be a minor issue, but I am having trouble "seeing" the observed bias in Table 1 in the notes (AIM: Summarize the study results using the various VaR measurement approaches). Does this "observed bias" mean that the dispersion is seemingly greater at 1% than at 5%?
Thanks in advance!
Hi @JDGutzmann

Not really dispersion. Table 1 displays what we would call the percentage of exceptions (aka exceedences): the percentage of daily losses that exceed the 5% or 1% VaR. So, for example, the unadjusted BRW approach for CHF produces 5.25% (third row and third column of the data columns). This means the daily CHF loss was greater than the VaR (per a distributional assumption) on 5.25% of the total days. The lack of an asterisk indicates this is not statistically different from 5.0%; i.e., they could not reject the null hypothesis that the true (population) tail probability is 5.0%. In short, they cannot reject the 5.0% VaR model!

However, under the 1% table, for example, BEF is 1.56% (the first asterisk). We don't have the standard error, but 1.56% is statistically different from 1.0%, so they reject the 99% (= 1%) VaR model as accurate. I hope that explains!
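For what it's worth, the test behind those asterisks can be sketched with a normal approximation to the binomial: under H0 the true exception rate is p, so the observed rate has standard error sqrt(p(1-p)/T). This is a sketch under an assumed sample size; the excerpt doesn't show the paper's actual T (or its exact test), and the verdict depends on T.

```python
import math

def exception_z_stat(p_hat, p, T):
    """z-statistic for H0: the true exception rate equals p, given an
    observed exception rate p_hat over T days (normal approx. to binomial)."""
    se = math.sqrt(p * (1 - p) / T)   # standard error under the null
    return (p_hat - p) / se

# Hypothetical T = 2,500 daily observations (~10 years); the paper's actual
# sample size is not shown in this excerpt.
T = 2500
z_chf = exception_z_stat(0.0525, 0.05, T)   # CHF row, 5% VaR table
z_bef = exception_z_stat(0.0156, 0.01, T)   # BEF row, 1% VaR table
print(f"CHF 5% VaR: z = {z_chf:.2f}")   # inside +/-1.96 -> cannot reject
print(f"BEF 1% VaR: z = {z_bef:.2f}")   # outside +/-1.96 -> reject
```

With this assumed T, CHF's 5.25% gives |z| well inside 1.96 (no asterisk) while BEF's 1.56% gives |z| above 1.96 (asterisk), matching the pattern in Table 1. Note the 1% table rejects more easily in relative terms because its standard error is small relative to the same absolute deviation.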