> It's a spectrum. On one side, advanced stats, or "the number". On the other, the "eye test". Both have to agree and be in balance. If you get too far off to one side, you're missing something. It's all about balance.
I think this is close, but I think the key is this: most of the time, analytics (I refuse to call them "advanced stats") and the eye test agree. It's when they don't agree that things get interesting. The problem, as I see it, is that far too many people want to assume that either analytics or the eye test is always correct in those situations, and I don't think that's true. What is true is that you need to take a closer look and figure out why they disagree.
Analytics had a big win early on with Corsi and Fenwick, because those were shown to correlate with winning better than just about any of the "standard" stats. They also showed that hits were inversely correlated with winning (as in, more hits tended to mean more losses). The people doing those early analytics looked into why, and realized that those stats were indicators of puck possession: teams that possessed the puck attempted more shots, and teams doing the hitting, by definition, didn't have the puck. And teams that possessed the puck more generally ended up winning more.
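For anyone who hasn't seen the math: Corsi counts all shot attempts (shots on goal, misses, and blocks) and Fenwick drops the blocked ones. Here's a minimal sketch of the calculation; the column names and numbers are made up for illustration, and real event data will look different depending on the source:

```python
# Toy Corsi/Fenwick calculation from per-game event counts.
# All column names and values are invented for illustration.
import pandas as pd

games = pd.DataFrame({
    "shots_for":       [32, 28, 41],
    "misses_for":      [12, 10, 15],
    "blocked_for":     [14,  9, 12],  # our attempts that got blocked
    "shots_against":   [25, 35, 22],
    "misses_against":  [ 9, 14,  8],
    "blocked_against": [10, 16,  7],  # their attempts that we blocked
})

# Corsi = all shot attempts; Fenwick = unblocked attempts only.
cf = games.shots_for + games.misses_for + games.blocked_for
ca = games.shots_against + games.misses_against + games.blocked_against
ff = games.shots_for + games.misses_for
fa = games.shots_against + games.misses_against

games["CF%"] = 100 * cf / (cf + ca)
games["FF%"] = 100 * ff / (ff + fa)
print(games[["CF%", "FF%"]].round(1))
```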
The problem is that people then took "CF% leads to winning" as the lesson, and that was not really the right answer. The real answer was that good teams ended up possessing the puck, and thus good teams tended to have better CF% and FF%. Basically, a whole lot of people decided that correlation was causation and missed what the numbers were actually telling us: that sometimes a good team would have a spell of bad luck and lose some early games, and then, once its luck evened out, the fact that it was good would mean it started racking up wins.
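You can see the trap in a toy simulation (every parameter here is invented): if some latent "team quality" drives both possession and wins, CF% and winning correlate strongly even though neither causes the other.

```python
# Toy confounder demo: team "quality" drives BOTH possession (CF%)
# and win%, so CF% correlates with winning without causing it.
# All distributions and coefficients are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_teams = 32

quality = rng.normal(0, 1, n_teams)                        # latent team skill
cf_pct = 50 + 3 * quality + rng.normal(0, 1.5, n_teams)    # possession tracks skill
win_pct = 50 + 5 * quality + rng.normal(0, 4.0, n_teams)   # so does winning, plus luck

print("corr(CF%, win%):    ", round(np.corrcoef(cf_pct, win_pct)[0, 1], 2))
print("corr(quality, win%):", round(np.corrcoef(quality, win_pct)[0, 1], 2))
```

In this toy world, pumping up a bad team's CF% on its own wouldn't win anything, because the shared cause is the skill, not the shot counts.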
I feel that all the "player cards," expected goals, and whatnot often miss the point. If you've got an "expected goals" model that almost always expects more goals than are actually scored, then your model isn't really doing a good job of telling you when goals should be expected, after all.
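The sanity check for that is simple: over a big enough sample, total xG should land close to total goals. A sketch, with a hypothetical per-shot table and invented values:

```python
# Quick calibration check for an expected-goals model: over a large
# sample, summed xG should roughly equal actual goals scored.
# `shots` is a hypothetical DataFrame, one row per shot attempt.
import pandas as pd

shots = pd.DataFrame({
    "xg":   [0.08, 0.31, 0.05, 0.62, 0.11, 0.04],
    "goal": [0,    1,    0,    0,    0,    0],
})

total_xg = shots["xg"].sum()
total_goals = shots["goal"].sum()
print(f"expected {total_xg:.2f}, actual {total_goals}")

# If this ratio sits well above 1.0 across thousands of shots, the
# model is systematically overpredicting goals, not revealing some
# league-wide finishing problem.
ratio = total_xg / max(total_goals, 1)
print(f"xG / goals ratio: {ratio:.2f}")
```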
If you've got "player cards" that can end up saying something completely different about a player when they move from one team to another, then maybe your player cards are picking up way too many team effects, rather than individual player characteristics.
Sometimes analytics reveal that a player is better than the eye test suggests, because we're not watching for the right things. But sometimes it means the analytics are the ones not measuring the right thing. So when they disagree, you have to actually do some work to figure out which case you're in.
(Someday, I'll use Tyler Kennedy as an interesting case study on this, because I think there are things that both the analytics and eye test backers can learn from it! :laugh:)