What's new

Merton model, a summary of the issues

#41
@David Harper,
Since this topic focuses on the Merton model, may I post a related question?
The answer calculates Dt as the present value of D minus the put option. My question is:
why not use a call option to calculate Et directly? Many thanks
 


David Harper CFA FRM
Staff member
Subscriber
Thread starter #42
Hi Jonathan, you don't have the asset volatility that you would need to price the call. Without the asset vol, you need to infer the call from the put. However, at a quick glance, the problem is that the value of the equity (under Merton) should equal the price of the European call option on firm assets with strike = FV of debt; i.e., the equity would be the $1.00. But the $1.00 is internally inconsistent: the price of the call must be at least 180 - 107*exp(-5%), its minimum value.

Notice you can also use put-call parity: if the put is priced at 1.50, then the price of the call option should equal the value of the equity = 180 - 107*exp(-5%) + 1.50 ~= $79.72; i.e., c + K*exp(-rT) = S + p. Same result as what the question is looking for: risky debt = risk-free debt - put = 107*exp(-5%) - 1.50, with equity = assets - risky debt = 180 - [107*exp(-5%) - 1.50] = 79.72. But the question's assumption that the call = $1.00 cannot be true.
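To make the parity argument concrete, here is a minimal sketch using the numbers quoted in this thread (assets = 180, face value of debt = 107, r = 5%, T = 1 year, put = 1.50; the one-year horizon is my assumption, as the thread only gives the discount factor exp(-5%)):

```python
import math

# Numbers from the thread (T = 1 year is assumed)
V, F, r, T, put = 180.0, 107.0, 0.05, 1.0, 1.50

pv_debt_riskfree = F * math.exp(-r * T)   # value of risk-free debt
risky_debt = pv_debt_riskfree - put       # risky debt = risk-free debt - put
equity = V - risky_debt                   # equity = assets - risky debt

# Put-call parity: c + K*exp(-rT) = S + p  =>  c = V + p - K*exp(-rT)
call = V + put - pv_debt_riskfree

print(round(equity, 2))                   # 79.72
print(round(call, 2))                     # 79.72 -- call equals equity, as Merton requires

# Lower bound on a call: c >= S - K*exp(-rT); a quoted call of $1.00 violates it
print(round(V - pv_debt_riskfree, 2))     # 78.22
```

Both routes give the same $79.72, and the lower bound of ~$78.22 shows why a $1.00 call price is internally inconsistent.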
 
#44
Hi David,

Please can I confirm I have the right formula: if LT/ST > 1.5, is the KMV approximate default point ST + (0.7 - 0.3*ST/LT)*LT?
In the study notes I saw the formula as ST + (0.7 - 0.3*ST/LT).

Thanks
Sunny
 

David Harper CFA FRM
Staff member
Subscriber
Thread starter #45
Hi Sunny - Apologies, but the study note contains an error. According to the de Servigny reading, if LT/ST > 1.5, then the default point = ST + (0.7 - 0.3*ST/LT)*LT, as you suggest. This has the effect, for LT/ST > 1.5, of implying a default point = ST + [50% to just under 70%]*LT, since ST/LT < 2/3 keeps the multiplier between 0.5 and 0.7. Thanks,
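A small sketch of the corrected formula. The LT/ST > 1.5 branch is from the de Servigny reading as confirmed above; the fallback of ST + 0.5*LT for other cases is the commonly cited KMV shortcut and is my assumption, not something stated in this thread:

```python
def kmv_default_point(st: float, lt: float) -> float:
    """Approximate KMV default point given short-term (st) and long-term (lt) debt."""
    if lt / st > 1.5:
        # de Servigny: multiplier 0.7 - 0.3*ST/LT stays in (0.5, 0.7) since ST/LT < 2/3
        return st + (0.7 - 0.3 * st / lt) * lt
    # Commonly cited KMV shortcut (assumed here): ST plus half of LT
    return st + 0.5 * lt

# Example: ST = 100, LT = 300, so LT/ST = 3 > 1.5
print(kmv_default_point(100.0, 300.0))  # 100 + (0.7 - 0.1)*300 = 280.0
```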
 
#46
Quick (and probably stupid) question:

you give DD = (LN[V(0)/F(t)] + [mu - sigma^2/2]*T)/[sigma*SQRT(T)]
but I see elsewhere (schweser) DD = (LN[F(t)/V] - [mu - sigma^2/2]*T)/[sigma*SQRT(T)]

are they the same? thanks!
 

David Harper CFA FRM
Staff member
Subscriber
Thread starter #47
Hi @southeuro They are indeed the same; the latter is shown because that is how Stulz presents it. I don't use it because it defies my intuition: LN[F(t)/V] is negative for a solvent company. It inverts successfully due to the symmetry of the normal distribution, which is nice, but the first formula can be directly understood. Thanks,
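A quick numerical check that the two DD formulas are two sides of the same coin (the inputs V0 = 180, F = 107, mu = 8%, sigma = 25%, T = 1 are hypothetical, chosen only for illustration):

```python
import math

def N(x: float) -> float:
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical inputs for illustration
V0, F, mu, sigma, T = 180.0, 107.0, 0.08, 0.25, 1.0

# First form (as in the OP): DD with LN[V(0)/F(t)]
dd1 = (math.log(V0 / F) + (mu - sigma**2 / 2) * T) / (sigma * math.sqrt(T))
# Second form (Stulz/Schweser): LN[F(t)/V] with the drift subtracted
dd2 = (math.log(F / V0) - (mu - sigma**2 / 2) * T) / (sigma * math.sqrt(T))

print(abs(dd1 + dd2) < 1e-12)          # True: dd2 is just -dd1
print(abs(N(-dd1) - N(dd2)) < 1e-12)   # True: PD = N(-dd1) = N(dd2), same answer
```

So the second formula is the negative of the first, and by the symmetry of the normal distribution, N(-dd1) = N(dd2): the implied PD is identical either way.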
 
#49
Hi all,
I just want to ask one question: what are the current general issues with structural models (Merton/KMV)? Do they still have problems calculating default probabilities accurately? Thanks in advance; I really need help right now.
 

David Harper CFA FRM
Staff member
Subscriber
Thread starter #50
Hi @hafidz Empirically, I am not current on the literature, but when I was consulting, the KMV EDF approach seemed to meet with success (it was considered better than typical ratings by many). Here are two papers from my library: https://www.dropbox.com/sh/889cmens0j51sli/AAAEZ3rH58CWt4FZa0wwsuVpa?dl=0

The key theoretical weakness of structural approaches (Merton, KMV) is their dependence on equity valuation (the stock price). One of the papers references a tradeoff between stability (a virtue of traditional "through the cycle" ratings-based approaches) and accuracy (a supposed virtue of structural approaches, which are point-in-time). The advantage of using the stock price is that it is constantly updated and forward-looking, but the disadvantage is that it is greatly informed by non-specific (non-company) factors. So these structural approaches tend to produce lower PDs in up markets and higher PDs in down markets, and thus overly inform the EDF with macro factors, whereas actual default is probably due more to company-specific issues. I hope that's helpful.
 
#51
Thanks Mr. David Harper, I very much appreciate it. For your information, I have read some of the literature, even from 2011, and found that the KMV model's predictions still reflect environmental factors, as you said. What kind of company-specific issues do you mean, Mr. David Harper?
 

David Harper CFA FRM
Staff member
Subscriber
Thread starter #52
@hafidz I just meant that in Merton/KMV (structural models), the PD (or EDF) is greatly informed by the company's stock price; e.g., if the stock plummets, the EDF goes up. And that is the intended virtue: presumably the forward-looking stock market identifies the problem faster. But, as a holder of stocks, it's clear to me that most of the short-term movement is due to the company's sector and/or the overall market in general (issues which are not company-specific). So, under these models, a company's PD (EDF) can increase due to "guilt by association" with the sector/market, while meanwhile its fundamental ability to repay debt has not really changed. In this way the models can give too much weight to non-company-specific factors (sector, market) and not enough to the company's specific ability to repay. I hope that explains,
 

mshah6490

New Member
Subscriber
#54
Hi @David Harper CFA FRM

I have a decent idea about the intuition as to why N(-d2) is the PD. I am able to understand the intuition behind ln(V(0)/K) and adding drift to calculate the distance.

I understand this as follows: we calculate how many standard deviations the expected asset value would have to fall so that it goes below the debt value (strike price). This is basically: (distance between the expected asset value at T and the default point)/(asset volatility scaled for time).

We calculate the expected asset value at time T, in very layman's terms, by adding the gain in asset value via drift to the current asset value.

However, I am not sure about the intuition behind the sigma^2/2 term. Why do we subtract it from mu (the asset drift)? I understand that you would get to the formula if you derive it using Itô's lemma and other regular stuff, but I am not able to get an intuition for this term.

I would really appreciate it if you could explain why we subtract sigma^2/2 from the asset return drift in the formula. Any relevance of sigma^2/2 over sigma^2?

Thanks.!
 
Last edited:

ShaktiRathore

Well-Known Member
Subscriber
#55
Hi,
The Merton model for credit risk uses the firm's equity value to infer the firm's asset value and asset volatility, then estimates the probability of default (PD) under the assumption that the firm's asset price follows a lognormal distribution. Because the asset price is lognormal, its log returns are normally distributed with mean mu - sigma^2/2. Therefore we subtract the sigma^2/2 term from mu (where mu is the drift of the asset's returns) to get the net drift (mean) of the log asset price, mu - sigma^2/2.
thanks
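A small Monte Carlo sketch of the point above, using hypothetical parameters (mu = 10%, sigma = 30%): under geometric Brownian motion the *average log return* comes out near mu - sigma^2/2, even though the *average gross return* still grows at mu. That gap is exactly the sigma^2/2 term being asked about:

```python
import math
import random

random.seed(42)  # fixed seed for reproducibility
mu, sigma, T, n = 0.10, 0.30, 1.0, 200_000  # hypothetical drift and volatility

log_returns = []
gross_returns = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # Exact GBM log return over [0, T]: (mu - sigma^2/2)*T + sigma*sqrt(T)*Z
    lr = (mu - sigma**2 / 2) * T + sigma * math.sqrt(T) * z
    log_returns.append(lr)
    gross_returns.append(math.exp(lr))

mean_log = sum(log_returns) / n       # ~ mu - sigma^2/2 = 0.055
mean_gross = sum(gross_returns) / n   # ~ exp(mu*T), i.e. log of it ~ 0.10
print(round(mean_log, 3), round(math.log(mean_gross), 3))
```

The median (and typical) path drifts at mu - sigma^2/2, which is why that is the drift used in the DD numerator, while the arithmetic mean of the terminal asset value still compounds at mu.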
 

ShaktiRathore

Well-Known Member
Subscriber
#57
Excellent explanation @ShaktiRathore. It is clear to me now.

I checked that the mean of a lognormal distribution is exp(mu + sigma^2/2) (ref: https://en.wikipedia.org/wiki/Log-normal_distribution and http://www.mathworks.com/help/stats/lognstat.html?refresh=true), but I get the idea now.

I am done with both levels of FRM but came across this concept recently and thought that this is a good platform to discuss this :)

Thank you very much.
Hi,
please visit :https://www.bionicturtle.com/forum/threads/lognormal.9289/#post-39992
to clarify more.
thanks
 
#59
Hi,
Can we, and how would we, use the PD (or DD) from the Merton model to define the rating of a company (in a bank's internal credit rating system)?
Thanks.
 

David Harper CFA FRM
Staff member
Subscriber
Thread starter #60
Hi @RiskQuant

I don't have realistic information on mapping in the direction of PD --> rating. Sorry :( It seems like a slightly unusual direction to go, from a ratio-based PD down to an ordinal level of measurement; the motivation isn't obvious to me (sorry, I am surely ignorant about some usage). My exposure is rather to mapping ratings (A, B, ...) to risk rating numbers (e.g., 10, 9, ...). The real art and science is validation. Although, analytically, if I were going to do it, there seem like so many ways, but I would probably do something simple like take a historical database of companies with credit ratings and simply calculate the default rates (with intervals); e.g., companies in this database that were rated "AA" experienced actual defaults of X% with a standard deviation of Y, so we will define an "AA" rating as the historically informed band = X +/- Y*a, with some rule for arbitrating overlaps. I could imagine other methods; e.g., ranking the PDs and binning the results.

It just seems to me to be an exercise wherein you would either (i) borrow from the credit rating agencies since you are, for some reason, "backing into" (backing down to a lower level of measurement) anyways, so why not utilize their definitions, or (ii), if not, use the opportunity to a priori define the rating categories. I don't know the motivation ...
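A toy sketch of the banding idea described above. Every number here is hypothetical, invented purely to illustrate "band = X +/- a*Y with a rule for arbitrating overlaps" (first match from safest to riskiest wins):

```python
# Hypothetical historical calibration: rating -> (mean default rate X, std dev Y)
hist = {
    "AA":  (0.0002, 0.0001),
    "A":   (0.0008, 0.0003),
    "BBB": (0.0030, 0.0010),
    "BB":  (0.0120, 0.0040),
}
a = 2.0  # band half-width multiplier (assumed)

def map_pd_to_rating(pd_: float) -> str:
    """Assign a model PD to the first rating band [X - a*Y, X + a*Y] it falls in."""
    for rating, (x, y) in hist.items():  # dict preserves safest-to-riskiest order
        if x - a * y <= pd_ <= x + a * y:
            return rating
    return "below BB"

print(map_pd_to_rating(0.0031))  # "BBB"
print(map_pd_to_rating(0.0002))  # "AA"
```

The "first match wins" rule is one simple way to arbitrate overlapping bands; ranking the PDs and binning the results, as mentioned above, would be an alternative design.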

In case it's helpful, here is Moody's Public Firm Expected Default Frequency (EDF) Credit Measures: Methodology, Performance, and Model Extensions: http://trtl.bz/29iNqp4
This is the updated KMV; in fact, this document is just an update of KMV's original methodology document (I did some work for the author of the original KMV document, Peter Crosbie). It discusses the mapping of DD to EDF, which I realize is maybe not what you are looking for, but just in case. As I mention in my OP above under Variation #2: KMV (Merton but with two adjustments), the N(-DD) produced by a lognormal asset return assumption is not a reliable PD, so Moody's (KMV) mapped the DD to an EDF (page 13, emphasis mine):
3.4 Moving from DD to EDF
"The Distance-to-Default provides an effective rank ordering statistic to distinguish firms likely to default from those less likely to default. We have verified its effectiveness by observing a strong empirical relationship between DDs and observed default rates: firms with larger DDs are less likely to default. However, one still needs to take a further step to derive PD estimates.

In the basic structural credit risk model DDs are normally distributed as a result of the geometric Brownian motion assumption used to model the dynamics of asset values. However, actual default experience departs significantly from the predictions of normally distributed DDs. For example, when a firm’s DD is greater than 4, a normal distribution predicts that default will occur 6 in 100,000 times. Given that the median DD of the entire sample of firms in the EDF dataset is not far from 4, this would lead to about one half of actual firms being essentially default risk-free. This is highly improbable.

Instead of approximating the distribution of DDs with a standard parametric distributional function, the EDF model constructs the DD-to-PD mapping based on the empirical relationship (i.e., the relationship evidenced by historical data) between DDs and observed default rates. Moody’s Analytics maintains the industry’s leading default database, with over 8,600 defaults as of the end of 2011. The process for deriving the DD-to-EDF empirical mapping begins with the construction of a calibration sample – large North American corporate firms – for which we have the most reliable default data. It is reliable in the sense that “hidden” defaults – defaults that occurred, but that were neither reported nor observed – are relatively less likely to cause estimation errors. The DD-to-EDF mapping is created by grouping the calibration sample into buckets according to the firms’ DD levels, and fitting a nonlinear function to the relationship between DDs and observed default frequencies for each bucket. A stylized version of the resulting DD-to-EDF mapping is plotted in Figure 8 in green, along with the DD-to-PD mapping (the orange line) implied by a normal distribution of DDs."
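A toy illustration of the bucket-and-map idea in the quote (the bucket edges and default rates below are entirely hypothetical, not Moody's data): group firms by DD, record the observed default frequency per bucket, then look a new firm's DD up in that empirical table instead of computing N(-DD):

```python
import bisect

# Hypothetical calibration table: (DD bucket upper edge, observed default rate)
buckets = [
    (1.0, 0.20),
    (2.0, 0.08),
    (3.0, 0.03),
    (4.0, 0.01),
    (6.0, 0.004),
    (float("inf"), 0.001),
]

def empirical_edf(dd: float) -> float:
    """Map a DD to a PD via the empirical bucket table (step-function version)."""
    edges = [upper for upper, _ in buckets]
    i = bisect.bisect_left(edges, dd)  # first bucket whose upper edge >= dd
    return buckets[i][1]

# At DD = 4.5 the empirical table gives 0.4%, whereas a normal distribution
# would give N(-4.5) ~ 3.4e-6 -- the "essentially default risk-free" problem.
print(empirical_edf(4.5))  # 0.004
```

The real EDF model fits a smooth nonlinear function through the buckets rather than a step function, but the core move is the same: replace the parametric N(-DD) with an empirically calibrated DD-to-PD mapping.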
 