Hockey Outsider
Registered User
- Jan 16, 2005
The purpose of this thread is to summarize "Vs X", an adjusted scoring system commonly used by the History of Hockey sub-forum.
As far as I know, longtime poster BM67 developed the earliest version of the system, in which each player's stats were compared to the 2nd-highest scorer in a given year. The reasoning was that the leading scorer would sometimes be an outlier, and at other times wouldn't separate himself far from the pack - whereas the runner-up was likely to be more consistent from year to year. That key assumption (the runner-up being a consistent benchmark each year) turned out to be false, so the method didn't work as intended - but instead of fiddling with a set of theoretical variables, BM67's system made everyone realize that comparing a player to the league's leaders was a surprisingly elegant and effective approach.
Another longtime poster, Sturminator, refined the methodology. He proposed that our starting point should be to use the runner-up as the benchmark - which can then be modified if certain conditions are met. I'm not going to re-post Sturm's analysis - it was a long, well-written post, and I have nothing substantial to add - but you can read it at http://hfboards.mandatory.com/showpost.php?p=60570205&postcount=1.
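To make the core idea concrete, here's a minimal sketch in Python. It implements only BM67's simple runner-up benchmark - none of Sturm's conditional adjustments - and the player names and point totals are invented for illustration:

```python
# Minimal sketch of the core "Vs X" idea: express each player's points
# as a percentage of a benchmark taken from the top of the league's
# scoring list. Only the simple runner-up benchmark is shown here;
# Sturm's refinements (outlier handling, conditional adjustments)
# are not implemented.

def vs_x_scores(season_points):
    """season_points: dict mapping player -> points for one season.
    Returns dict mapping player -> score as a percentage of the benchmark."""
    ranked = sorted(season_points.values(), reverse=True)
    benchmark = ranked[1] if len(ranked) > 1 else ranked[0]  # runner-up
    return {player: round(100 * pts / benchmark, 1)
            for player, pts in season_points.items()}

# Invented season where the leader is an extreme outlier, so the
# runner-up anchors the scale instead of the leader:
season = {"Leader": 212, "RunnerUp": 147, "Third": 139}
scores = vs_x_scores(season)
# The outlier leader lands well above 100; everyone else is scaled
# against the runner-up's total.
```

The point of benchmarking to the runner-up (rather than the leader) is visible in the example: the outlier season doesn't compress everyone else's scores.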
The following posts outline the results, and the benchmarks for each year.
Benefits
- Ultimately, performance relative to an "adjusted" league leader is captured. Since offense in hockey is relative - a goal today is worth more than a goal in 1982 - the approach is appropriate. Those who have reviewed the results of other adjusted scoring metrics know that changes in league-wide scoring levels don't directly track the output of the league's top scorers (1993 is a good example - it wasn't that much higher-scoring than 1992 or 1994, but the distribution of the scoring was unique that year). Since the benchmark is the league leader, rather than a theoretical set of variables, the actual results are smoothed out so that the leaderboards, excluding any outliers, are relatively consistent from year to year.
- As a result of the previous point, each season counts equally. A dominant season in 1927 counts just as much as a dominant season in 2017. (Of course, there are far more players in the NHL in 2017 than in, say, 1967, so there are far more players with good and great seasons. But I don't see why an elite performance from 2017 would be inherently better than an elite performance from 1967, so the leaders of both seasons would have similar results).
- Performance, rather than a player's name, is what ultimately counts. Gretzky is obviously treated as an outlier throughout the 1980s - but in other seasons, 1997 for example, he's not treated as an outlier just because his name is Gretzky, so other players aren't given an artificial boost.
- Flexibility. We can look at the top scorers over X number of years. We can make those years consecutive or not. We can look at actual results or per-game results. The data can be sliced and diced many different ways.
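As one illustration of that slicing (a sketch with an invented career of Vs X season scores, not the thread's actual data), two of the cuts mentioned above - best N seasons overall, and best N consecutive seasons:

```python
# Sketch of two "slices" of Vs X data, using invented season scores
# for a hypothetical player (one score per season, in career order).
seasons = [92.0, 105.5, 88.0, 110.0, 101.0, 76.5]

def best_n(scores, n):
    """Best N seasons, not necessarily consecutive."""
    return sorted(scores, reverse=True)[:n]

def best_consecutive_n(scores, n):
    """Best N-season consecutive stretch, ranked by total score."""
    windows = [scores[i:i + n] for i in range(len(scores) - n + 1)]
    return max(windows, key=sum)

# best_n(seasons, 3)             -> [110.0, 105.5, 101.0]
# best_consecutive_n(seasons, 3) -> [105.5, 88.0, 110.0]  (total 303.5)
```

Note the two cuts can disagree: the best three seasons overall include the 101.0 year, while the best consecutive stretch has to carry the 88.0 year along with its neighbours.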
Drawbacks
- This metric focuses on regular season offense only. Performance in the playoffs, defensive acumen, physicality, etc., are all ignored.
- The actual process for calculating the benchmark each year can be complicated. I know Sturm's system can seem arbitrary or needlessly complex - but I also think it's the best system we have, and each adjustment exists for a reason.
- Time-consuming to calculate. I have everything set up in Excel so it's (relatively) easy for me to run the numbers. Anyone attempting to do this manually is probably in for hours (or days) of painful effort.
- The system isn't additive. That is, adding up a player's goals and assists, either for a single season or over the course of a career, won't add up to their points. That's because the yearly benchmarks for goals, assists, and points are calculated separately. The majority of the time I think looking at points is ideal, but the system can be useful in isolating goal-scoring or playmaking if desired.
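The non-additivity point can be shown with toy numbers (invented for illustration): because goals, assists, and points each get their own benchmark, a player's Vs X goals plus Vs X assists generally won't equal their Vs X points.

```python
# Toy illustration (invented numbers) of why Vs X is not additive:
# goals, assists, and points are each scaled by a different benchmark.

player = {"goals": 50, "assists": 60, "points": 110}
# Hypothetical league benchmarks for the same season:
benchmarks = {"goals": 60, "assists": 80, "points": 130}

vs_x = {stat: round(100 * player[stat] / benchmarks[stat], 1)
        for stat in player}

# goals:   100 * 50/60   = 83.3
# assists: 100 * 60/80   = 75.0
# points:  100 * 110/130 = 84.6
# 83.3 + 75.0 = 158.3, which bears no fixed relationship to the 84.6
# points figure, because the three denominators differ.
```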
Notes
- Throughout this thread, I have data going back to 1926-27 only.
- Initially there were "weightings" for different seasons (i.e. a player's best season was given a weighting of A, the second-best a weighting of B, etc.). Ultimately the consensus emerged that this was unnecessary - it added needless complexity for minimal informational value. Throughout this thread, no weightings are used.
- I'll end this post the way Sturminator ended his - Comments, criticisms, fact-checking and personal attacks are all welcome.