TINV function: does it return two-tailed or one-tailed?

Discussion in 'P1.T2. Quantitative Methods (20%)' started by sipanivishal, Oct 29, 2008.

  1. sipanivishal

    sipanivishal Manager-Corporate Banking

    Hi David,

    I was going through your spreadsheets and got stuck on two of them: the Gujarati distributions sheet and the Student's t sheet. The Student's t sheet shows that TINV gives one-tailed values, while the Gujarati sheet says it gives two-tailed values. Which is correct?

    Thanks
    Sipani
  2. Hi Sipani,

    TINV() gives a two-tailed inverse, so sometimes you have to convert.
    e.g., TINV(5%, high d.f.) = 1.96, which is the two-tailed normal critical value:
    TINV(5%, high d.f.) = 1.96 = NORMSINV(97.5%), i.e., the two-tailed normal.

    so, if you want a one-tailed inverse Student's t @ 5%, then you need to manually double the parameter:
    TINV(5% * 2, d.f.) = TINV(10%, d.f.)
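    As a minimal sketch of the conversion above, using Python's standard library: at high d.f. the Student's t converges to the standard normal, so NormalDist serves as a stand-in for NORMSINV (the helper names are hypothetical):

```python
from statistics import NormalDist

def two_tailed_critical(p):
    # critical value leaving p/2 in each tail: the convention of
    # Excel's legacy TINV(p, d.f.) at high d.f.
    return NormalDist().inv_cdf(1 - p / 2)

def one_tailed_critical(p):
    # critical value leaving p in ONE tail; equals the two-tailed
    # value at doubled probability, i.e., TINV(2 * p, d.f.)
    return NormalDist().inv_cdf(1 - p)

print(round(two_tailed_critical(0.05), 2))  # 1.96, i.e., NORMSINV(97.5%)
print(round(one_tailed_critical(0.05), 2))  # 1.64, same as two_tailed_critical(0.10)
```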

    David
  3. sipanivishal

    sipanivishal Manager-Corporate Banking

    Hi David,

    I cross-checked; it gives two-tailed, I think. Can you tell me about the p-value for the chi-square? Do the p-value functions return two-tailed or one-tailed values? And is there anything like two-tailed and one-tailed in the case of the F distribution?

    Thanks
    Sipani
  4. sipanivishal

    sipanivishal Manager-Corporate Banking

    Hi David,

    One more thing: when we are using chi-square or, for that matter, F-distribution hypothesis testing, don't you think 95% or 99% is too high a probability to reject anything? Shouldn't we ideally use a much lower probability, for the sake of a smaller interval and hence a more robust result?

    Thanks
    Sipani
  5. Hi Sipani,

    "don't you think that 95% or 99% is too high a probability to reject anything... we should ideally be doing it at a much lower probability for the sake of a smaller interval and hence a more robust result."
    No, our confidence level does not really change the properties of the sample. If, say, the market return is +10% and our sample hedge fund strategy returns +12%, it is true we can lower the confidence until the null (i.e., H0: hedge fund strategy return = market return) can be rejected, but this is achieved only at the cost of being less confident in the rejection. Put another way, the selected confidence/significance level does not change the p-value.

    But you raise an interesting point, one much related to Type I/Type II errors. We can add a dimension that *could* make your point correct: we can attach a cost to each error type. This idea underlies the Basel market risk backtest and the credit risk model PERFORMANCE METRICS in de Servigny Ch 3; for example, the minimum-risk decision rule. We may consider one of the errors (I or II) to be more damaging than the other. In this case, we may want to lower the confidence, but note: this is motivated by a preference to avoid one error type over the other (since they cannot be minimized simultaneously except by increasing the sample size).

    So, to apply that to the above: we can make two errors about this hedge fund (say we are considering an investment, but we want to know whether it outperforms). Type I = we mistakenly find the 2% outperformance significant (when it is not); Type II = we mistakenly conclude the fund does not really outperform (when it does).

    So, maybe you say to me, "I deem the Type II error to be worse because it means I may pass on a manager who outperforms. I would MUCH rather mistakenly invest with a manager who does not really outperform than make the mistake of passing on a good investor."
    Okay, so now you have attached a higher cost to the Type II error and a "bias in favor of Type I error." Now you are justified in lowering the confidence level.
    At the end of the day, the confidence level ought to express a bias about the error types, since you cannot avoid both of them simultaneously.
    Related, that's why VaR uses a high confidence level: the Type I error we can live with (setting aside too much capital), while the Type II error is damaging (not setting aside enough capital). So we set the confidence high, biasing toward Type I.

    Regarding the p-value: please take a look at my summary Gujarati comparison; I made it just for this kind of question.
    Notice at the bottom: there is a p-value for each of the one- and two-tailed tests.
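    The doubling relationship between the two p-values can be sketched the same way, using the standard normal as a high-d.f. stand-in (the test statistic here is hypothetical):

```python
from statistics import NormalDist

z = 1.96  # hypothetical test statistic
p_one_tailed = 1 - NormalDist().cdf(z)  # area in the right tail only
p_two_tailed = 2 * p_one_tailed         # both tails: exactly double

print(round(p_one_tailed, 3))  # 0.025
print(round(p_two_tailed, 3))  # 0.05
```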

    Regarding the F: yes, because there is a p-value for each test, and the F can test one- or two-tailed. Although, in the context of our application (i.e., are the variances drawn from the same population?), it is a two-tailed test. I cannot think of a one-tailed application in the context of our study.

    David
  6. ...actually, let me amend that about the F distribution. There is a one-tailed test: given two sample variances, we may ask "is one greater than the other?" (i.e., rather than "are the sample variances different?", which is really our two-tailed test), and this would be a reasonable "risk-type" query - David
  7. sipanivishal

    sipanivishal Manager-Corporate Banking

    Hi David,

    I got confused by your explanation of the F distribution. I looked at the graph of the F distribution and it appeared to me to be one-tailed (we just check whether the p-value is greater than the significance level or not). Correct me if I am wrong.

    Thanks
    SIpani
  8. Yes, I think it is confusing to speak of one-/two-tailed for the F distribution.

    The XLS is like a two-tailed test in the sense that it tests whether the sample variances are unequal (<>), as opposed to whether one variance is < or > the other.

    But since the F stat is defined as:

    larger variance / smaller variance

    it will always be greater than 1.0 and, in regard to the critical lookup, it is always a one-sided comparison. So the XLS is correct. As I think about it, "one- or two-tailed" terminology may be confusing here... I am looking at Gujarati and he doesn't seem to try to fit these terms to the F.
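    A minimal sketch of that convention, with hypothetical samples (statistics.variance returns the sample variance):

```python
from statistics import variance

# hypothetical return samples
a = [0.01, 0.03, -0.02, 0.05, 0.00, 0.02]
b = [0.10, -0.08, 0.12, -0.05, 0.07, -0.09]

# F statistic: larger sample variance over smaller sample variance,
# so it is >= 1.0 by construction and the critical lookup is one-sided
f_stat = max(variance(a), variance(b)) / min(variance(a), variance(b))
print(f_stat >= 1.0)  # True by construction
```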

    David
