Hi, excellent explanation @ShaktiRathore. It is clear to me now.
I checked that the mean of a log-normal distribution is exp(mu + sigma^2/2) (ref: https://en.wikipedia.org/wiki/Log-normal_distribution and http://www.mathworks.com/help/stats/lognstat.html?refresh=true), and I get the idea now.
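That formula is easy to verify with a quick Monte Carlo check (the parameter values below are arbitrary, just for illustration):

```python
import math
import random

# Parameters of the underlying normal distribution (arbitrary example values)
mu, sigma = 0.1, 0.25

# Analytical mean of the log-normal: exp(mu + sigma^2 / 2)
analytical_mean = math.exp(mu + sigma**2 / 2)

# Monte Carlo check: X = exp(Z) with Z ~ N(mu, sigma^2)
random.seed(42)
n = 200_000
sample_mean = sum(math.exp(random.gauss(mu, sigma)) for _ in range(n)) / n

print(f"analytical: {analytical_mean:.4f}")
print(f"simulated:  {sample_mean:.4f}")
```

The simulated mean should land within a fraction of a percent of the analytical value.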
I am done with both levels of the FRM but came across this concept recently and thought this would be a good platform to discuss it.
Thank you very much.
3.4 Moving from DD to EDF
"The Distance-to-Default provides an effective rank ordering statistic to distinguish firms likely to default from those less likely to default. We have verified its effectiveness by observing a strong empirical relationship between DDs and observed default rates: firms with larger DDs are less likely to default. However, one still needs to take a further step to derive PD estimates.
In the basic structural credit risk model DDs are normally distributed as a result of the geometric Brownian motion assumption used to model the dynamics of asset values. However, actual default experience departs significantly from the predictions of normally distributed DDs. For example, when a firm’s DD is greater than 4, a normal distribution predicts that default will occur 6 in 100,000 times. Given that the median DD of the entire sample of firms in the EDF dataset is not far from 4, this would lead to about one half of actual firms being essentially default risk-free. This is highly improbable.
Instead of approximating the distribution of DDs with a standard parametric distributional function, the EDF model constructs the DD-to-PD mapping based on the empirical relationship (i.e., the relationship evidenced by historical data) between DDs and observed default rates. Moody’s Analytics maintains the industry’s leading default database, with over 8,600 defaults as of the end of 2011. The process for deriving the DD-to-EDF empirical mapping begins with the construction of a calibration sample – large North American corporate firms – for which we have the most reliable default data. It is reliable in the sense that “hidden” defaults – defaults that occurred, but that were neither reported nor observed – are relatively less likely to cause estimation errors. The DD-to-EDF mapping is created by grouping the calibration sample into buckets according to the firms’ DD levels, and fitting a nonlinear function to the relationship between DDs and observed default frequencies for each bucket. A stylized version of the resulting DD-to-EDF mapping is plotted in Figure 8 in green, along with the DD-to-PD mapping (the orange line) implied by a normal distribution of DDs."
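The two mappings described above (the normal-implied orange line and the bucketed-empirical green line of Figure 8) can be sketched on toy data. Everything below — the sample size, the "true" default-rate parameters, and the exponential functional form used for the fit — is an assumption for illustration, not Moody's actual calibration:

```python
import math
import random

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

# Normal-implied PD at DD = 4 (the orange line):
pd_normal = norm_cdf(-4.0)   # ≈ 3.2e-5; the quoted "6 in 100,000"
                             # appears to match the two-sided tail, 2*Phi(-4)

# --- Empirical DD-to-EDF mapping on synthetic data (the green line) ---
random.seed(0)
sample = []
for _ in range(50_000):
    dd = random.uniform(0.5, 8.0)
    true_pd = 0.15 * math.exp(-0.9 * dd)   # toy "real world": fatter tail than normal
    sample.append((dd, random.random() < true_pd))

# 1) Bucket firms by DD level and compute observed default frequencies
n_buckets, lo, hi = 10, 0.5, 8.0
width = (hi - lo) / n_buckets
counts, defaults = [0] * n_buckets, [0] * n_buckets
for dd, defaulted in sample:
    b = min(int((dd - lo) / width), n_buckets - 1)
    counts[b] += 1
    defaults[b] += defaulted

points = [(lo + (b + 0.5) * width, defaults[b] / counts[b])
          for b in range(n_buckets) if defaults[b] > 0]

# 2) Fit EDF ≈ a * exp(-c * DD) by least squares on log(frequency)
xs = [m for m, _ in points]
ys = [math.log(f) for _, f in points]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
a, c = math.exp(y_bar - slope * x_bar), -slope

def dd_to_edf(dd: float) -> float:
    """Empirical mapping: DD in, fitted default frequency out."""
    return a * math.exp(-c * dd)

print(f"normal-implied PD at DD=4: {pd_normal:.2e}")
print(f"empirical EDF at DD=4:     {dd_to_edf(4.0):.2e}")
```

On this toy data the empirical curve at DD = 4 sits orders of magnitude above the normal-implied one, which is exactly the discrepancy the passage uses to motivate mapping DD to EDF empirically rather than through the normal distribution.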