I don’t disagree about the idea of points percentage being a more accurate barometer, but the thing about games in hand is that you still have to win them. The points you’ve already got, you’ve already got. There’s no maybe or probability or likelihood about it.
No, sir:
It is very official that points are the first determinant of the standings.
I mean, every team starts with the goal of obtaining 164 points, and they have 82 chances to get them. If one team is ahead of another in both points and games played, that means they have made good on their available chances. The other team may have one or more chances left, but you don't get to skin the bear before you've killed it.
Yes, the NHL lists points, rather than points %, as the official determinant of the standings. And it almost never matters, because at the end of the season teams have played the same number of games, so points and points % give the same result. Yet, in spite of what the rules officially say, when a season did end with an uneven number of games played, the NHL used points % to determine the standings rather than points. Which makes sense, as that was the "fairest" thing to do.
Whenever this comes up I hear the argument that when determining standings you shouldn't "count" points not yet actually earned. I understand how that might seem to make sense; it may not feel right to assume that a team would earn points in games they have yet to play. But is it right to assume they wouldn't earn any? That's essentially what you're doing if you rank by raw points when teams have played an uneven number of games.
A team with X games in hand could earn anywhere from 0 to 2X points if those games were played. If the goal in creating standings is to compare (and eventually reward) how well teams have performed, how do you account for that games-in-hand potential? The fairest (reasonably simple) method, one that is actually supported by the data, is to "assume" they'd keep winning at the same rate they'd managed up to that point. Of course there's no guarantee that's how it would turn out, but there's no better answer short of actually playing the same number of games.
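To make the arithmetic concrete, here's a minimal sketch of that "project at current rate" idea. Points % is points earned divided by points available (2 per game played), and games in hand are credited at the team's current earning rate. The function names are my own illustration, not any official NHL formula.

```python
def points_pct(points: int, games_played: int) -> float:
    """Points earned as a fraction of points available (2 per game played)."""
    return points / (2 * games_played)

def projected_points(points: int, games_played: int, games_in_hand: int) -> float:
    """Credit games in hand at the team's season-to-date rate.

    A team with X games in hand could earn anywhere from 0 to 2*X points;
    this assumes they keep earning at the rate they've shown so far.
    """
    return points + 2 * games_in_hand * points_pct(points, games_played)
```

For example, a team with 10 points through 5 games (a 1.000 points %) holding 10 games in hand would project to 10 + 2*10*1.0 = 30 points.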
Here's an extreme example. Assume Team A and Team B have played the same 5 teams but Team A has played each three times.
Team A: 5-9-1, 11 pts, 0.367 p%
Team B: 5-0-0, 10 pts, 1.000 p%
If the objective is to rank actual performance at that point in time, which team should be ranked higher? Let's say that was a complete season for Team A, Team B had those 10 games left, and they were competing for the last playoff spot. Which would you rather be? Would anyone really take Team A because they had those 11 points locked in? Yes, this is a ridiculous example but it does illustrate the problem with raw points.
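Running the two ranking rules on that example shows the disagreement directly. This is just a sketch of the comparison; it assumes the current NHL scoring of 2 points for a win and 1 for an OT/shootout loss, with records given as (wins, losses, OT losses).

```python
# The extreme example: both teams faced the same 5 opponents,
# but Team A played each opponent three times.
teams = {
    "Team A": (5, 9, 1),   # 15 games played
    "Team B": (5, 0, 0),   # 5 games played
}

def points(record):
    """2 points per win, 1 per OT/shootout loss."""
    wins, losses, otl = record
    return 2 * wins + otl

def pct(record):
    """Points earned divided by points available (2 per game played)."""
    wins, losses, otl = record
    games = wins + losses + otl
    return points(record) / (2 * games)

by_points = sorted(teams, key=lambda t: points(teams[t]), reverse=True)
by_pct = sorted(teams, key=lambda t: pct(teams[t]), reverse=True)
# Raw points rank Team A (11) ahead of Team B (10);
# points % ranks Team B (1.000) ahead of Team A (0.367).
```

Same data, opposite orderings, which is exactly the problem with raw points under unequal games played.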
As AD points out above, during the season it doesn't really matter, since teams almost always end up playing the same number of games. It's kind of a sore point for me, though, because as a statistician (MS in stats, biostatistician for a number of years), when I want to check how teams have actually performed at any point in time, I can't help but manually adjust for differences in games played, and if all I have are points, that's kind of a pain in the ass.
Again, I'm aware of what the rules state, but I'm also aware that when it actually mattered and the two methods disagreed, the NHL actually used points %. Seems like they need to clean that up. Not directly related, but when the AHL had to use an unbalanced schedule (which is effectively what the NHL standings reflect mid-season), it used points % as well.