That's true, but most aren't. Funnily enough, I put together an extremely lazy, one-assumption model for the North division before last season. Here's how my myopic model finished versus yours (from your last update before the season started):
Team | My Expected | Melvin Expected | Actual | My Diff | Melvin Diff |
--- | --- | --- | --- | --- | --- |
Calgary | -4.67 | 16 | -5 | 0.33 | 21 |
Edmonton | -28.5 | -1 | 29 | 57.5 | 30 |
Montreal | 41.33 | 14 | -7 | 48.33 | 21 |
Ottawa | -28.5 | -18 | -34 | 5.5 | 16 |
Toronto | 34.67 | 3 | 38 | 3.33 | 35 |
Vancouver | -64.33 | -19 | -39 | 25.33 | 20 |
Winnipeg | 50 | 6 | 18 | 32 | 12 |
As you can see, mine predicted 3 of the teams nearly spot on, had much wider errors than yours for 3 of the others, and a slightly wider error than yours for Vancouver. Yours, on the other hand, was pretty consistent in error magnitude (relative to mine), but the errors went in both directions. Which model was more useful? I think both were junk at the end of the day. We did both project that Vancouver was going to be garbage last year, though... so there's that.
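If anyone wants to put one number on "junk", here's a quick sketch in Python using the rounded values from the table above. Mean absolute error is just my pick of summary stat for this post, not something either model actually defined:

```python
# Mean absolute error of each model vs. actual results (values from the table above)
teams  = ["Calgary", "Edmonton", "Montreal", "Ottawa", "Toronto", "Vancouver", "Winnipeg"]
mine   = [-4.67, -28.5, 41.33, -28.5, 34.67, -64.33, 50]
melvin = [16, -1, 14, -18, 3, -19, 6]
actual = [-5, 29, -7, -34, 38, -39, 18]

# Average absolute miss per team for each model
mae_mine   = sum(abs(p - a) for p, a in zip(mine, actual)) / len(teams)
mae_melvin = sum(abs(p - a) for p, a in zip(melvin, actual)) / len(teams)
print(f"My MAE: {mae_mine:.1f} | Melvin MAE: {mae_melvin:.1f}")
```

Run that and both land in the low-to-mid 20s of average miss per team (roughly 24.6 for mine vs. 22.1 for yours on these numbers), which is basically the "both were junk" conclusion in one number.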