I've had a bunch of long posts that I've ultimately chosen not to post because it feels like the people I'm arguing against will never update their priors. One of them, written recently for this thread, was a breakdown of Kucherov's last two years showing that the only differences between them are things we know are random and not under a player's control: his even-strength on-ice shooting percentage and his individual points percentage (IPP). Another took that random variance around a fixed true talent level and showed just how much point totals can swing on those numbers alone. It just feels like there's a group that is aggressively not about the math, because to me all of this is perfectly supported by a basic variance calculation.
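As a sketch of that basic variance calculation (every input below is a hypothetical round number I'm choosing for illustration, not Kucherov's actual figures): treat on-ice goals as a binomial outcome of shot volume and on-ice shooting percentage, then credit the player his share via IPP.

```python
import math

# Hypothetical inputs: a star forward on the ice for ~1200
# even-strength shots for his team over a full season.
shots = 1200
sh_pct = 0.09       # assumed true-talent on-ice shooting percentage
ipp = 0.70          # assumed individual points percentage

# Binomial variance on on-ice goals: Var = n * p * (1 - p)
mean_goals = shots * sh_pct
sd_goals = math.sqrt(shots * sh_pct * (1 - sh_pct))

# Points the player gets credit for (goal or assist) via IPP
mean_pts = mean_goals * ipp
sd_pts = sd_goals * ipp

print(f"on-ice ES goals: {mean_goals:.0f} +/- {sd_goals:.1f}")
print(f"ES points:       {mean_pts:.0f} +/- {sd_pts:.1f}")
# A +/-2 SD swing is roughly +/-14 ES points on identical talent
```

That's a ~28-point spread between a "down year" and a "career year" with the exact same underlying player, before power play variance even enters the picture.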
Why do we have more outlier seasons now? Because when you throw 100 player-seasons at the wall instead of 18, you get more chances to land an outlier.
People like to have narratives, to think there's something controlling results, but really it's all variance. You get this whole justification of even-strength vs. power-play opportunities, depth scorers, fewer assists per goal, fewer teams or more teams, when the reality is that scoring hasn't changed in 80 years. Look at seasons with nearly identical league scoring levels and the same normal curve of point totals pops out to populate them; that curve just fills out with a much smoother distribution at 100 samples than at 18.
That expanded curve is still a small sample in the scheme of simulating a full season. The era between 11-12 and 15-16 was a stagnant period for scoring, alternating between 218 and 219 (prorating the lockout-shortened 12-13 to 82 games). Each year, the VsX benchmark should have been between 95 and 96, but that only occurred in 11-12 and 12-13 (prorating again). In 2 of the other 3 years, you had the superstar season above the benchmark (Crosby's 13-14, Kane's 15-16) but without the secondary season that matches the benchmark (like Stamkos' 11-12). The same concept happened earlier: 86-87 through 89-90 were 4 straight years of similar scoring (league average was 294, 297, 299, and 295), yet you see VsX numbers of 108, 131, 139, and 129. By my numbers, all 4 years should've been in that 130 range, but it overshot in one year and undershot in another. It's hard when you're relying on so few players - if Mario plays 75 games in 86-87 instead of 63, he ends up with about 130 points, and that benchmark would've been correct.
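The proration used throughout here is simple per-game scaling. A minimal sketch, using Lemieux's real 86-87 line (107 points in 63 games - an actual figure, though not stated above), which lands in the neighborhood of the ~130 mentioned:

```python
def prorate(points, games_played, games_target):
    # Scale a point total to a different schedule length
    # at the same points-per-game rate.
    return points / games_played * games_target

# Lemieux's real 1986-87 line: 107 points in 63 games, scaled to 75 games
print(round(prorate(107, 63, 75)))  # 127

# The lockout-shortened 2012-13 (48 games) prorated to a full 82-game
# schedule, with a hypothetical 64-point season as the input
print(f"{prorate(64, 48, 82):.1f}")
```

Straight proration gives ~127 rather than exactly 130; the gap between those is itself a reminder of how sensitive these benchmarks are to a dozen games of one player's health.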
In the end, even having the correct benchmarks really isn't that important in the scheme of things. Does knowing the VsX benchmark in 52-53 should've been 73.64 instead of 61 take anything away from Howe's 95-point season? It just means his VsX would've been 128.99 instead of 155.74. Compare that to McDavid's 22-23, where the benchmark was spot on at 113, and McDavid's VsX only goes from 135.40 to 135.28. Even in those extreme outlier seasons, the difference between 135 and 129 is essentially just 6 points. This year, VsX was set at 120, whereas my numbers have it around 111. For Kucherov, that's the difference between a 120 and a 129.84 VsX score; for MacKinnon, 116.67 versus 126.23. The absolute difference between 155, 135, 129, and 120 doesn't really matter - it's the fact that they're all 20-30% above peak seasons.
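For anyone following the arithmetic: a VsX score here behaves as points divided by the benchmark, times 100, and that formula reproduces the numbers quoted above (McDavid's real 153-point total is assumed, as it isn't stated in the post).

```python
def vsx(points, benchmark):
    # A player's season expressed as a percentage of the benchmark season
    return points / benchmark * 100

# Howe's 95-point 1952-53 under both benchmarks discussed above
print(f"{vsx(95, 61):.2f}")     # 155.74
print(f"{vsx(95, 73.64):.2f}")  # ~129 (quoted as 128.99, modulo rounding)

# McDavid's 153-point 2022-23 against the spot-on 113 benchmark
print(f"{vsx(153, 113):.2f}")   # 135.40
```

Which is the whole point: a benchmark error of 12 points moves Howe's score by ~27, while the same formula barely moves McDavid's, yet every one of these seasons sits far clear of 100.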