Why Normalisation?
In competitive exams, different groups of candidates (called slots) often get question papers that are not identical. Even though equal care is taken, one slot may turn out a little easier while another may feel a little harder. If raw marks alone were compared, candidates in one slot could be unfairly advantaged.
To avoid this, the Staff Selection Commission applies normalisation. Think of it like running a race: some runners may be given a flat track, while others get a hilly track. Simply comparing finishing times would be unfair, because the effort required is not the same. Normalisation is the adjustment that makes performances from different tracks comparable.
The Old Method (Z-score)
Until now, the Commission used the z-score method. Under this system, each candidate’s performance was compared to the average score in his or her slot, and then expressed in terms of steps above or below that average.
These “steps” are technically called standard deviations. They measure how scattered or spread out the scores are in that slot. For example, being one step above average means your score was higher than the average by the size of one standard deviation.
If two candidates, even from different slots, were the same number of steps above their slot’s average, they were treated as nearly equal.
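To make the idea of "steps" concrete, here is a small Python sketch of the calculation. The marks list is invented for illustration; it is not the Commission's data, and the code is only the textbook z-score formula, not the official implementation.

```python
# A minimal sketch of the z-score idea, using made-up marks for one slot.
from statistics import mean, pstdev

slot_marks = [45, 52, 60, 61, 63, 70, 74, 78, 85, 90]  # hypothetical raw marks in one slot

slot_mean = mean(slot_marks)    # the slot's average score
slot_sd = pstdev(slot_marks)    # standard deviation: how spread out the slot's scores are

def z_score(raw_mark):
    """Number of 'steps' (standard deviations) a mark sits above or below the slot average."""
    return (raw_mark - slot_mean) / slot_sd

print(z_score(90))  # roughly +1.6: about one and a half steps above this slot's average
```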
The Problem with this Method
This approach works best when scores are spread out symmetrically on both sides of the average, in the familiar bell-curve pattern. In real exam data, this is rarely the case:
- Sometimes the top scores are crowded: many candidates bunch close together in the 80–100 range.
- In other slots, very few reach the top, while a large number remain at very low marks. These low scores stretch the spread on one side and make the picture lopsided.
- In such cases, one step above average in a crowded top group does not represent the same standing as one step above average where almost nobody else is present.
Example: Asha and Rahul
Let us consider two candidates:
- Asha wrote an easier slot. Many candidates in her slot scored in the 90s, so the top end was crowded. With 90 marks, she was ahead of about 85% of her slot.
- Rahul wrote a harder slot. Very few candidates crossed into the 90s, so the top was sparse. With the same 90 marks, he was ahead of about 99% of his slot.
Under the earlier z-score method, both Asha and Rahul appeared nearly equal, because each was about one and a half steps above the average in their slot. But clearly, Rahul’s achievement was rarer and deserved to be recognised as such.
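The contrast is easy to check with a few lines of Python. The two marks lists below are invented purely to mimic the pattern described above, so the toy percentiles come out near 85% and 95% rather than the exact figures in the example.

```python
# Toy marks lists (not actual SSC data) that only mimic the pattern described above.
def percentile_rank(marks, score):
    """Percentage of candidates in `marks` who scored below `score`."""
    return 100.0 * sum(m < score for m in marks) / len(marks)

easier_slot = [52, 60, 66, 70, 74, 77, 80, 82, 84, 85, 86, 87, 88, 88, 89, 89, 89, 92, 95, 98]
harder_slot = [8, 12, 15, 18, 20, 22, 25, 27, 30, 32, 34, 36, 38, 41, 44, 48, 52, 58, 66, 92]

print(percentile_rank(easier_slot, 90))  # 85.0 -> a 90 beats 85% of the easier (crowded-top) slot
print(percentile_rank(harder_slot, 90))  # 95.0 -> the same 90 beats 95% of the harder slot
```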
The New Method (Equipercentile)
To correct this, the Commission has adopted the equipercentile method. Instead of only asking how many steps above or below the average a candidate is, the new method looks directly at the candidate’s position among peers.
It asks a straightforward question: “How many people are behind you?”
- Asha’s 90 is recognised as roughly the 85th percentile: she performed better than 85% of her slot.
- Rahul’s 90 is recognised as roughly the 99th percentile: he performed better than 99% of his slot.
When scores are then mapped across slots, candidates at the same percentile level are treated equally. In other words, if two candidates are ahead of the same proportion of test-takers, they will always receive the same scaled score, no matter which slot they attempted.
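The mapping itself can be sketched in a few lines of Python. The reference scale used here (the two slots pooled together) and the linear interpolation are simplifying assumptions made for illustration; the Commission's official procedure may differ in its details.

```python
# A minimal sketch of equipercentile mapping, assuming a pooled "reference" score list
# serves as the common scale. This is an illustration, not the official SSC formula.
import numpy as np

def percentile_rank(marks, score):
    """Percentage of candidates in `marks` who scored below `score`."""
    marks = np.asarray(marks)
    return 100.0 * (marks < score).mean()

def equipercentile_score(score, own_slot, reference):
    """Map `score` from its own slot to the mark at the same percentile in `reference`."""
    p = percentile_rank(own_slot, score)
    return np.percentile(reference, p)  # interpolate within the reference scores

# Hypothetical slots; the common scale here is simply both slots pooled together.
easier_slot = [52, 60, 66, 70, 74, 77, 80, 82, 84, 85, 86, 87, 88, 88, 89, 89, 89, 92, 95, 98]
harder_slot = [8, 12, 15, 18, 20, 22, 25, 27, 30, 32, 34, 36, 38, 41, 44, 48, 52, 58, 66, 92]
reference = easier_slot + harder_slot

print(equipercentile_score(90, easier_slot, reference))  # ~89 on the common scale
print(equipercentile_score(90, harder_slot, reference))  # ~92 on the common scale: the rarer 90 ranks higher
```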
The Benefit
The equipercentile method ensures fairness across easy and hard slots. It does not rely on the assumption that scores are spread symmetrically or evenly. Instead, it reflects the true standing of each candidate among all test-takers. In short, rather than relying only on the average and the spread, it takes the entire distribution of scores in each slot into account.
In summary, just as adjusting race times makes flat-track and hilly-track runs comparable, equipercentile normalisation makes marks from different slots comparable. It ensures that every candidate’s score is a fair and accurate recognition of performance.
What does this mean for students?
Students who scored high in a tougher slot will now get a fairer deal.