I knew you’d avoid answering the question. Once again:
So, now that we’ve established an ironclad truth, can you answer a question regarding said ironclad truth? At what point does a percentage change in percentage switch from “more” to “easily more”:
1%
2%
3%
4%
5%
6%
7%
8%
9%
10%
11%
12%
13%
14%
14.5%
14.6154% (maybe Laine just barely hit the threshold)
Feel free to choose a number in between that I didn’t list (e.g. 11.4%, 7.8%), or even less than 1%. For all I know you consider 0.1% to be “easily.”
Btw, percentages don't increase linearly like that unless the total we're comparing against remains static (the increase is linear if, say, the total number of shots in all cases was 100, but if the total number of shots differs between players, then percentages don't increase linearly like this). That's because variance depends on the total number of shots, so the percentages aren't comparable with one another unless you specifically convert them all to assume the same number of shots, and adjust for variance.
In other words, they're not directly comparable.
As for the question, you could, for instance, use one standard deviation relative to the entire sample population as the threshold for "easily more," but these terms are abstract and subjective, and the answer depends on the dispersion of the dataset.
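To make that one-standard-deviation idea concrete, here's a minimal sketch of my own (not anything established in the thread): a pooled two-proportion z-test, which measures the gap between two shooting percentages in standard-deviation units under a normal approximation to the binomial.

```python
import math

def two_proportion_z(made_a, shots_a, made_b, shots_b):
    """Z-score for the gap between two percentages, using the
    pooled two-proportion z-test (normal approximation)."""
    p_a = made_a / shots_a
    p_b = made_b / shots_b
    p_pool = (made_a + made_b) / (shots_a + shots_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / shots_a + 1 / shots_b))
    return (p_a - p_b) / se

# 57/243 (23.46%) vs 43/178 (24.16%): how wide is the gap in SDs?
z = two_proportion_z(57, 243, 43, 178)
print(round(z, 3))  # → -0.167, well under one standard deviation
```

By this yardstick, the two example percentages used throughout this post are nowhere near one standard deviation apart, so neither would qualify as "easily more" than the other.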
So you could compare two percentages (57/243 = 23.457% and 43/178 = 24.157%) like this:
57/243 = exp(ln(57/243)) = exp(ln(57) - ln(243)) = exp(4.043 - 5.493) = exp(-1.45), and:
43/178 = exp(ln(43/178)) = exp(ln(43) - ln(178)) = exp(3.761 - 5.182) = exp(-1.421)
so you could compare them: exp(-1.45 - (-1.421)) = exp(-1.45 + 1.421) = exp(-0.029) = 0.9714, and exp(0.029) = 1.0294
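The same log-space comparison can be checked numerically with a quick sketch (note that the exact factors come out to about 0.9710 and 1.0299; the 0.9714 figure reflects rounding the logs to three decimals):

```python
import math

# Comparing 57/243 and 43/178 on the log scale, as in the text.
log_a = math.log(57) - math.log(243)   # ln(57/243) ≈ -1.450
log_b = math.log(43) - math.log(178)   # ln(43/178) ≈ -1.421
ratio_ab = math.exp(log_a - log_b)     # A as a multiple of B
ratio_ba = math.exp(log_b - log_a)     # B as a multiple of A
print(round(ratio_ab, 4), round(ratio_ba, 4))
```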
so those are the direct comparison factors (A is 0.9714 times as good as B, or B is 1.0294 times as good as A). Or you could compare them against a common /100 baseline, first finding the combined /100 percentage:
(57+43)/(243+178) = x/100
100/421 = x/100
x = 100/421 * 100
x = 10000/421
x = 23.753
and then:
exp(ln(23.753) - ln(100)) = exp(3.1677 - 4.605) = exp(-1.4373)
and then: exp(-1.4373 - (-1.45)) = exp(0.0127) and exp(-1.4373 - (-1.421)) = exp(-0.0163)
and exp(0.0127) = 1.0128 and exp(-0.0163) = 0.9838, which is how each player relates to the version converted to 100 samples.
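That pooled-baseline comparison, sketched end to end (the exact factors come out to about 1.0126 and 0.9833; the figures above differ slightly because the logs were rounded to a few decimals):

```python
import math

# Pool both players, convert to a per-100 rate, then measure how
# the pooled baseline relates to each player on the log scale.
pooled = (57 + 43) / (243 + 178) * 100       # ≈ 23.753 per 100 shots
log_pool = math.log(pooled) - math.log(100)  # ≈ -1.4375

dev_a = log_pool - (math.log(57) - math.log(243))  # pooled vs player A
dev_b = log_pool - (math.log(43) - math.log(178))  # pooled vs player B
print(round(math.exp(dev_a), 4), round(math.exp(dev_b), 4))
```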
And what you notice is that 0.0127 and -0.0163 sit at different distances from the point of comparison, which was the converted /100 percentage of their combination. You'll also notice that with a lower number of samples, the dispersion (the absolute value of the exponent) is greater. This makes sense logically, too, and it does its part to explain the phenomenon of "regression toward the mean": if you have a greater number of shots on goal, then your own participation in the total sample population of shots on goal explains more of the league's average.
In other words, this shows that percentages cannot be compared to one another by default. Direct comparisons can only be made if the percentages came from the same number of samples, i.e., if the players had the same number of shots on goal. Otherwise, they must all be converted into a format that assumes the same number of shots, such as 100 in my example. And you can apply the same methodology to every sample in the population you're comparing.
This is basically a manifestation of the principle that with more samples, you can be more confident in the accumulated value: if a player has taken 5000 shots on goal, you can be more confident that their shooting% reflects their true shooting% than if they had taken 500, assuming the generator (the player taking the shots) doesn't change along the way.
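Under a binomial model, that confidence statement can be quantified: the standard error of a shooting percentage is sqrt(p(1-p)/n), so ten times the shots shrinks the uncertainty by a factor of sqrt(10) ≈ 3.16. A sketch, using a hypothetical true shooting% of 12%:

```python
import math

def shooting_pct_se(p, shots):
    """Standard error of an observed shooting percentage under a
    binomial model: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / shots)

p = 0.12  # hypothetical true shooting percentage
print(round(shooting_pct_se(p, 500), 4))   # 500 shots
print(round(shooting_pct_se(p, 5000), 4))  # 5000 shots: ~3.2x tighter
```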
And then, when you have these numbers, you can also perform outlier detection to predict which players are most likely to regress towards the mean and which players truly deserve their high shooting%, etc.
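As a sketch of that kind of outlier detection (the 10% league average and both player lines here are hypothetical): score each player's shooting% in binomial standard deviations given their own shot count. An identical percentage built on fewer shots earns a smaller z-score and is the better bet to regress toward the mean.

```python
import math

def shooting_z(goals, shots, league_pct):
    """How many binomial standard deviations a player's shooting%
    sits above the league rate, given their shot count."""
    p = goals / shots
    se = math.sqrt(league_pct * (1 - league_pct) / shots)
    return (p - league_pct) / se

league = 0.10  # hypothetical league-average shooting percentage
# Same 18% shooting percentage, very different shot counts:
print(round(shooting_z(18, 100, league), 2))   # small sample
print(round(shooting_z(90, 500, league), 2))   # larger sample, higher z
```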