Let's stick with the Bobby Hull example. We'll compare 1966 and 1969. Right now, they're rated virtually even (there's a 1.4% difference), with 1969 slightly ahead. (EDIT: I see you said that you'd correct this in post 52. Now you'd have 1966 slightly ahead, by 1.5%.)
In 1966, Hull scored 54 goals, and in 1969, he scored 58 goals. (Note that he actually scored more goals per game in 1966, which you say is the basis of the method).
In 1966, Hull was the only player to score 40+ goals in the NHL. In 1969, there were six 40-goal scorers.
In 1966, there were 6 players who scored 30+ goals (nobody aside from Hull scored more than 32). In 1969, there were 19 players with 30+ goals, and 12 of them scored more than 32.
In 1966, there were 28 players with 20+ goals. In 1969, there were 52.
With all due respect, I don't see how a reasonable system can suggest that Hull's 1969 season is even close to his 1966 campaign. Hull was very good in 1969, but there were many players who were close to him. In 1966, he was on a different level compared to every other goal-scorer in the NHL.
My guess? Hull's 1969 score is inflated because the NHL had recently doubled in size. Players who weren't good enough to hold a regular roster spot during the Original Six era were now playing full-time on expansion teams, so the 80% group you're comparing him to consists of a diluted talent pool.
I have data to back that up. In 1966, by my count, 46 players scored 80% of the goals, and 17 of them (37%) were Hall of Famers. In 1969, I believe you needed 99 players to make up 80% of league-wide goal-scoring, and I count 19 Hall of Famers among them (19%). Thus, Hull's 1969 campaign is being artificially inflated relative to his seasons in the era of more concentrated talent. This would explain why the best seasons of Hull, Howe, Richard, etc. are penalized under this method.
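For clarity, here's roughly how I did that count; a minimal Python sketch, assuming the "80% group" just means walking down the goal-scoring list until the cumulative total reaches 80% of league-wide goals. The function name and the toy numbers are mine, purely for illustration, not real NHL data.

```python
def players_covering_share(goals_by_player, share=0.80):
    """Count how many top goal scorers it takes to reach `share`
    (e.g. 80%) of the league's total goals."""
    total = sum(goals_by_player.values())
    threshold = share * total
    running, count = 0, 0
    # Walk down the scoring list from the top until the cumulative
    # goal total crosses the threshold.
    for g in sorted(goals_by_player.values(), reverse=True):
        running += g
        count += 1
        if running >= threshold:
            break
    return count

# Toy example with made-up numbers:
toy_league = {"A": 50, "B": 40, "C": 30, "D": 20, "E": 10, "F": 10}
print(players_covering_share(toy_league))  # 4 (top 4 cover 140 of 160 goals, >= 128)
```

Run against the actual 1966 and 1969 goal totals, this kind of count is where my 46 and 99 come from.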