Percentage of Correct Answers to pass

#3
Just my observation from many other internet forums and sources.

They say that the FRM exam (both Part 1 and Part 2) passing score is variable. First, take the average score of the top 5% of candidates. Then multiply it by 75%. The resulting figure is the passing score.

So for example, if there are 1,000 candidates and the top 50 candidates averaged 90 marks, then the passing score is 90 x 75% = 67.5 marks.
In this example, the worst case is if the top 50 candidates all scored 100 marks; then the passing score would be 75 marks.
But that also means that if you can get 75% of your answers correct, you can be sure of passing the exam.
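The rule described above can be sketched in a few lines of Python. This is purely illustrative of the rumored rule, not a confirmed GARP methodology; the 5% fraction and 75% multiplier are the figures from the post.

```python
# Illustrative sketch of the rumored pass-mark rule described above.
# Assumptions: anchor = average score of the top 5% of candidates,
# multiplier = 75%. None of this is confirmed by GARP.

def passing_score(scores, top_fraction=0.05, multiplier=0.75):
    """Average the top `top_fraction` of scores, then apply the multiplier."""
    ranked = sorted(scores, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    anchor = sum(ranked[:k]) / k
    return anchor * multiplier

# Example from the post: top 50 of 1,000 candidates average 90 marks.
scores = [90] * 50 + [60] * 950
print(passing_score(scores))  # 67.5
```

With the worst case from the post (top 50 all scoring 100), the same function returns 75.0.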

Hope the comment helps, anyway.
 

David Harper CFA FRM
Staff member
Subscriber
#5
I don't exactly agree with chiyui about the methodology, but I think it is essentially similar (some percentage of an anchor that is based on the top 5%).

I have high confidence that the pass methodology is based on applying a ratio to an anchor derived from the top 5%. However, specifically:
  • I am currently unclear on whether the anchor is the average of the top 5.0%, or simply the score of the candidate who falls at the 5.0% quantile. Note the analog to ES and VaR! If there are 1,000 candidates, the first method produces an "anchor" = average of the top 50 scores (as a conditional mean, this is analogous to ES), while the second method returns an "anchor" = the score of the candidate who happens to earn the 51st-highest score (as a quantile, this is analogous to VaR).
  • I do not think the percentage multiplier (e.g., 75%) is either constant or even ex ante specified: I think the ratio is ex post calibrated to manage the overall outcome vis-à-vis perceived difficulty. In other words, I think this value, call it X, floats. FWIW, thanks,
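The two candidate anchor definitions above can be contrasted with a short sketch. Both functions are hypothetical reconstructions of the two possibilities David describes, not GARP's actual computation; the uniform score list is just a toy distribution to make the ES/VaR gap visible.

```python
# Sketch contrasting the two candidate "anchor" definitions:
# (1) ES-like: average of the top 5% of scores (a conditional mean);
# (2) VaR-like: the score just past the top-5% cutoff (the 51st-highest
#     score when there are 1,000 candidates). Illustrative only.

def anchor_es(scores, top_fraction=0.05):
    ranked = sorted(scores, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / k

def anchor_var(scores, top_fraction=0.05):
    ranked = sorted(scores, reverse=True)
    idx = int(len(ranked) * top_fraction)  # 51st-highest of 1,000
    return ranked[idx]

scores = list(range(1, 1001))  # toy distribution: scores 1..1000
print(anchor_es(scores))       # 975.5 (mean of top 50: 951..1000)
print(anchor_var(scores))      # 950 (the 51st-highest score)
```

As with ES and VaR at the same confidence level, the conditional-mean anchor is always at least as high as the quantile anchor, so method (1) would imply a (weakly) higher passing score for any given multiplier.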
 
#6
David Harper CFA FRM said: [quoting post #5 above]
Oh, I see. May I know more about what this means? Thanks a lot for your comments, David.
 

David Harper CFA FRM
Staff member
Subscriber
#7
It is my understanding that GARP prefers to calibrate the ratio X after the fact (after seeing the distribution) to account for an exam that is more difficult than expected; e.g., if a certain exam is overall more difficult than expected, this gives them flexibility to "grade on a curve" more accommodatingly by lowering the ratio a bit. And, unlike say the CFA Level I, which is more predictable, I actually think this makes sense for the FRM given the high rate of content churn: it would be difficult to write exams that are consistently at the same exact overall level of difficulty. Thanks,
 
#8
Hi David,
This is very interesting.
However, my understanding was that the difficulty of the exam is already reflected in the (average of the) top 5% score.
So, applying a "floating" X means that they incorporate into X all the other "secondary" considerations (limiting the number of charterholders, regulating supply/demand, etc.).
Is that right?
 

David Harper CFA FRM
Staff member
Subscriber
#9
Hi NewComer, before I go too far out on a limb: I cannot know that GARP will be applying this methodology in May 2013. I spoke with GARP about their method in 2011 and again in late 2012, but I have not even tried to ask them about it for the May 2013 exam. I cannot assume the 2011-2012 methodology persists ... but okay ...

... my understanding of the floating X was for one simple reason (I don't know about other factors like supply/demand; I don't even know that it has anything to do with managing a pass rate, although it is interesting that there is a fairly consistent pass rate, sans one outlier): due to the rapid syllabus changes, I had understood the floating X to be a "safety valve" in case the exam was tougher than expected, as evidenced by the distribution, such that the passing grade could be modestly lowered to compensate for a difficult exam (i.e., more difficult than intended). This sort of outcome might not be evidenced by the top 5% (this group might be excellent under any test; in distributional terms, the upper tail may not be sufficiently descriptive, yes?).

For example, maybe there is a target X = 70% or X = 75%, but the exam is experienced as abnormally, unintentionally difficult, such that 70% * top 5% [ES | VaR] admits too few candidates, so X can be lowered. At some point, I got the impression the float's purpose was in this error direction (accepting additional candidates), not the other (i.e., not to tighten the noose). I doubt it totally floats, but it would make sense to me if there were a range, given how much the syllabus changes.

It cannot be overstated: it would be almost impossible to write a sequence of exams with identical difficulty levels, over time, when the syllabus changes at its current rate, but also (actually, the even bigger reason) in conjunction with a significant degree of over-assignment. That is, the actual exam only tests a small percentage of the syllabus. Overshadowing even the rate of change is this exam sampling variation. (I don't agree with the rapid syllabus change, nor with what I would call the "over-assignment" of AIMs; I think the combination of both is detrimental and counterproductive.)
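The "safety valve" idea can be made concrete with a toy calibration: given a score distribution and an anchor, lower the multiplier X within a band until the implied cutoff admits at least some target share of candidates. Everything here is hypothetical (the band, the target, the scores); nothing about GARP's actual procedure, if one exists, is public.

```python
# Toy illustration of an ex-post "floating X": lower the multiplier,
# within a band, until the implied cutoff admits at least a target
# pass rate. Entirely hypothetical; not GARP's actual procedure.

def calibrate_x(scores, anchor, target_pass_rate,
                x_max=0.75, x_min=0.65, step=0.01):
    x = x_max
    while x >= x_min:
        cutoff = x * anchor
        pass_rate = sum(s >= cutoff for s in scores) / len(scores)
        if pass_rate >= target_pass_rate:
            return x, cutoff
        x = round(x - step, 2)
    # Floor reached: accept the lowest multiplier in the band.
    return x_min, x_min * anchor

# A "harder than intended" exam: most scores cluster well below the anchor.
scores = [55] * 400 + [62] * 400 + [80] * 200
x, cutoff = calibrate_x(scores, anchor=90, target_pass_rate=0.50)
print(x, cutoff)
```

In this toy run the target X of 0.75 would pass only 20% of candidates, so the multiplier floats down until enough candidates clear the cutoff, which matches the "accepting additional candidates" direction described above: the float can only lower the bar within the band, never raise it above the target.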

... so every year, naturally, candidates would like to understand how many correct answers might constitute a pass, but so far as I can tell, there has never been any evidentiary basis for throwing out any numbers at all. So far, it looks to be "grading on a curve," and, to your point, the curve even has two unknown parameters. Thanks,
 
#10
Thanks a lot David!
I always wanted to learn more about psychometrics (?); unfortunately, I don't have the time...

Don't you have, by chance, any links?

Thank you once more.
 

David Harper CFA FRM
Staff member
Subscriber
#11
Interesting, I had forgotten the term "psychometrics," but now that you mention it, GARP did conduct (via a hired outside firm) a psychometric analysis, which they summarily reviewed with training providers. But I don't really know about that field; like you, I would like to know more, but I don't have links, sorry. When they reviewed it with us, I can't say I understood what constituted the "psychometric" influence, or frame, really. Thanks,
 
#12
David Harper CFA FRM said: [quoting post #9 above]
So why doesn't GARP keep the exam difficulty at a more stable level by means of a more careful evaluation of the multiple-choice questions, instead of lowering the threshold in order to push the passing rate to their desired level?
It now seems like they don't care whether the standard is met by the candidates; they just want to boost the number of Certified FRMs at a rate they like......
 

David Harper CFA FRM
Staff member
Subscriber
#13
Chiyui, I don't think I went that far. I think GARP's "constructive defense" of their approach has generally been along the lines of: this flexibility allows for sourcing questions from the real world. They lean heavily on the following; I quote from the AIMs (it is not unrelated that the AIMs themselves are not a literal, guaranteed set of exam representations; they are more like recommendations than promises about what will be exactly tested. The framing of the AIMs discourages literal exam accountability in favor of some unpredictability, where the ostensible motivation is "real world"), emphasis mine:
The FRM Exam is a practice-oriented examination. Its questions are derived from a combination of theory, as set forth in the readings, and “real-world” work experience. [<<-- this is a key phrase: they have always claimed to source questions from practitioners in addition to the text assignments, a practice that challenges multiple-choice stability] Candidates are expected to understand risk management concepts and approaches and how they would apply to a risk manager’s day-to-day activities.
So I think they would say the primary goal is not to generate stable multiple-choice questions; and in their defense, the exam is not very amenable to traditional test-taking techniques (I do mean that in a positive way). It could be argued, I guess, that the exam tries to be fair in the sense that everybody has the same experience, but not exactly fair in the sense that we know exactly what will be asked. I do think this methodology, in my humble opinion, as a byproduct, frustrates the right segment (i.e., those who prefer technique and rote memorization), and there is another segment (learners, practitioners) that it seems to give less pause. That's all devil's advocate; it's not even necessarily my position, it's too theoretical for me now. Thanks,
 
#14
No, that's not your fault. We just based our speculation about what GARP will do on what we could see from the superficial evidence...
You have given a valuable comment on what attitude we should take toward the FRM exam scoring scheme. It's much appreciated.

That's why I also think FRM Part 2 is difficult: because many of the questions are outside any "textbook" syllabus I can grasp (as you said, many of the questions are drafted by practitioners and risk managers from their daily operations).
 