Understanding University Rankings
In this modern age of technology and whatnot, people around the world now have access to a facility our ancestors were denied: the university ranking. Whether your publication of choice is the US News and World Report or the Times Higher Education Supplement, you're bound to find tables that pinpoint an institution's position relative to its peers.
These rankings have become gospel for many, with universities revelling in being declared first (or 192nd) in the world or the country. Students and parents pore over the rankings when deciding where to apply.
But should we take these rankings with a grain of salt? Or should it be a pinch? Perhaps a rock? How accurate are these rankings?
Some people have attacked the rankings as lacking credibility. To be sure, gigantic mistakes are often made (one Asian university was given huge bonus points by one ranking for supposedly having many foreign students, but a check revealed that they were all citizens of that country); sometimes, the methodology used is questionable.
But that doesn't mean the rankings are completely unreliable. We just have to interpret them carefully and exercise good judgement.
Some people believe the rankings can do no wrong — that the university ranked 100th must be better than that ranked 110th, or even 101st. Some students decide where to apply based not on whether the campus, student life, and academic environment suit them, but whether the university is in the top X institutions.
I wonder if these people have ever read about opinion surveys. Typically, when such a survey is carried out, it is reported with a margin of error, such as +/-3%. Roughly speaking, the closer the difference between two percentages is to the margin of error, the less confident we can be that the difference reflects anything real.
Let's look at an example. An opinion survey compares support for politician A and politician B. The margin of error is +/-5%; 45% of the respondents support A, and 42% support B. The difference is 3%, which is within the margin of error, and thus the result is a statistical tie: we cannot say whether A or B has more support. If the difference exceeds the margin of error, say 6%, then we can state with reasonable confidence who is ahead, though never with absolute certainty. The greater the difference, the more confident we can be.
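The tie-or-not rule above is simple enough to sketch in a few lines of code. This is a minimal illustration of the article's heuristic (difference within the margin of error means a statistical tie); the function name is my own, not from any polling library.

```python
def is_statistical_tie(pct_a, pct_b, margin_of_error):
    """Two survey percentages are statistically tied when the gap
    between them is within the reported margin of error."""
    return abs(pct_a - pct_b) <= margin_of_error

# Politician A at 45%, B at 42%, margin of error +/-5%:
# a 3-point gap is inside the 5-point margin, so it is a tie.
print(is_statistical_tie(45, 42, 5))   # True

# A 6-point gap exceeds the margin, so we can call a leader.
print(is_statistical_tie(48, 42, 5))   # False
```

(Strictly, comparing two percentages from the same poll calls for a margin on the *difference*, which is a bit larger than the margin on each number, but the rule of thumb above is the one the example uses.)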
The reason for this lies in statistics. Whenever we look at an abstraction of the data, such as a sample (rather than the entire population), we cannot be sure that the abstraction accurately represents reality. Statisticians have devised ways to quantify how sure or unsure we can be, in other words, how big the margin of error is.
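For a simple poll, that quantification has a standard textbook form: the margin of error for a sample proportion shrinks with the square root of the sample size. Here is a minimal sketch, assuming a simple random sample and a ~95% confidence level (z = 1.96); the function name is my own.

```python
import math

def margin_of_error(p, n, z=1.96):
    # Standard margin of error for a sample proportion p (as a
    # fraction between 0 and 1) from a simple random sample of
    # size n; z = 1.96 corresponds to ~95% confidence.
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll in which 45% support a candidate:
moe = margin_of_error(0.45, 1000)
print(f"+/-{moe * 100:.1f} percentage points")  # roughly +/-3.1
```

Note that quadrupling the sample size only halves the margin of error, which is why even large, careful surveys still report margins of a few percentage points.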
With this in mind, how can we possibly say with any certainty that, say, the 100th and 101st institutions in a ranking are ranked in the correct order? Making that assertion is tantamount to claiming that the margin of error is smaller than one rank.
I have never seen a university ranking that presents a margin of error, probably for marketing reasons (nobody wants to be told "Uh, we're not quite sure if this is correct..."). And I'm not sure it would even be intellectually honest to present a precise margin of error in the first place: given the dozens of variables these rankings try to boil down into a single number, the rank, it's impossible to pin down the uncertainty in the difference between any two universities.
Thus, we must judge for ourselves when we look at university rankings, and indeed almost any other ranking. We would not say that Manchester United is indisputably better than Arsenal on the basis of MU beating the Gunners 2-1 in a single match, so why do we say that Cambridge is indisputably better than Oxford on the basis of a one- or two-place difference in the university tables?