That's true! Although ideally, the most predictive models will decrease the odds of a type II error (failing to pick the winning team) while also decreasing the odds of a type I error (picking a team that goes on to lose). That is, a perfect model would correctly identify the winning team 100% of the time, with a 0% chance of an error. Of course that isn't possible in reality, but then the question becomes: how close can we get?
So in other words, if I say that Team A has a 52% chance of winning a series (and they do), I should get less credit than if I say that Team A has a 65% chance of winning a series (and they do).
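The post doesn't name a particular scoring rule, but one common way to formalize that "more credit for a bolder correct call" idea is log loss. A minimal sketch (the function name and the printed values are just illustrative, not anything from the model above):

```python
import math

def log_loss(predicted_prob: float, team_won: bool) -> float:
    """Log loss for one series prediction: lower loss means more credit."""
    p = predicted_prob if team_won else 1.0 - predicted_prob
    return -math.log(p)

# Team A wins the series in both cases; the bolder correct call scores better.
print(round(log_loss(0.52, True), 3))  # ~0.654 (higher loss, less credit)
print(round(log_loss(0.65, True), 3))  # ~0.431 (lower loss, more credit)
```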
I'll also clarify that this type of model - the SRS algorithm - is in the public domain, so I can't take credit for it. Moreover, there are multiple adjustments that can be made to improve the model's performance, and I post these partly to engender a bit of interest in the spirit of "I bet I can do better, and I'm going to try".
(My main use for the SRS algorithm - aside from promoting a bit of competition here - is that it's a pretty reasonable retrospective measure of team strength, so I use it to gauge goaltender schedule difficulty and the like. For instance, if a team is rated at +0.40 goals/game, that may not be as predictive of future results as it could be, but it's very representative of what they've already accomplished.)
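Since SRS itself is public, here's a minimal sketch of the usual fixed-point formulation for anyone who wants a baseline to improve on. The function name, toy schedule, and iteration count are my own illustrative choices, not the exact code behind the ratings quoted above:

```python
def simple_rating_system(games, iterations=200):
    """Iterative SRS: each team's rating converges to its average goal
    margin plus the average rating of its opponents (in goals/game)."""
    margins, opponents = {}, {}
    for home, away, hg, ag in games:
        margins.setdefault(home, []).append(hg - ag)
        margins.setdefault(away, []).append(ag - hg)
        opponents.setdefault(home, []).append(away)
        opponents.setdefault(away, []).append(home)

    ratings = {t: 0.0 for t in margins}
    for _ in range(iterations):
        new = {
            t: sum(margins[t]) / len(margins[t])
               + sum(ratings[o] for o in opponents[t]) / len(opponents[t])
            for t in ratings
        }
        mean = sum(new.values()) / len(new)  # re-center so the league average stays at zero
        ratings = {t: r - mean for t, r in new.items()}
    return ratings

# Toy schedule of (home, away, home_goals, away_goals) tuples.
toy = [("A", "B", 4, 2), ("B", "C", 3, 3), ("A", "C", 2, 1), ("C", "A", 1, 3)]
print(simple_rating_system(toy))
```

A rating of +0.40 from this kind of calculation reads as "about 0.40 goals/game better than a league-average team, after adjusting for schedule" - which is why it works well as a retrospective measure even if it isn't the best forward-looking predictor.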