Let's say you're in Lower Manhattan, NYC, and you need to get to LaGuardia for a flight in 30 minutes. On average, cab drivers in NY make it to LaGuardia in under 30 minutes 27.5% of the time. Individual drivers' percentages vary from 22.5% to 32.5%.

You have two cab drivers in front of you. Driver (A) has been driving for 10 years (5000 trips), and makes the trip in 30 minutes 30% of the time. Driver (B) just started driving, has done the trip 5 times, and made it in time twice. Which cab should you take?

The answer, plus a basic lesson in statistics, after the break.

The answer is that you should take driver A, the one who has performed better than average over 10 years. The odds are that driver B got lucky and will not continue that success. Notice that this example can be turned into batting averages, give or take. If Carlos Gomez gets two hits in his first two at bats, does that make him a career .400 hitter? Yes, for now. Do you expect him to hit .400 for the rest of his career?
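The cab comparison can be worked out with Bayes' rule. Here's a rough sketch (the uniform prior over 22.5%-32.5% and the grid resolution are my assumptions, not from the post): treat each driver's true on-time rate as unknown, update it on their record, and compare the expected rate for the next trip.

```python
# Rough Bayesian sketch of the cab example (prior is an assumption):
# true on-time rates are taken as uniform over 22.5%-32.5%, then updated
# on each driver's observed record. Work in log space so driver A's
# 5000-trip likelihood doesn't underflow.
from math import exp, log

rates = [0.225 + 0.001 * i for i in range(101)]  # uniform prior grid

def expected_rate(successes, trips):
    """Posterior mean of the true on-time rate given the record."""
    logw = [successes * log(p) + (trips - successes) * log(1 - p)
            for p in rates]
    m = max(logw)                       # shift for numerical stability
    w = [exp(lw - m) for lw in logw]
    total = sum(w)
    return sum(p * wi / total for p, wi in zip(rates, w))

driver_a = expected_rate(1500, 5000)  # 30% on time over 5000 trips
driver_b = expected_rate(2, 5)        # 2 of 5 trips on time

print(f"Driver A expected rate: {driver_a:.3f}")
print(f"Driver B expected rate: {driver_b:.3f}")
```

Driver A's long record pins his expected rate near his observed 30%, while driver B's 40% collapses back toward the middle of the prior range, which is why the veteran is the better bet.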

Terms like small sample size and regression toward the mean get thrown around here a lot (I'm really just quoting Ubelmann). But at this site, and even more so around the internet/blogosphere/tv/newspaper (ahem Hartman!), people don't really seem to understand them. I tried to find a good link to an explanation of credibility, but failed, so I'll draw up the "lesson plan" myself. I hope this makes sense; I'm not much of a teacher.

Alright, let's say you are flipping a fair coin (50% heads). If you flip the coin once, you will get either 1 (heads) or 0 (tails). If you get a 1, do you expect to always get heads in the future? Of course not, but after 1 flip there is a 50% chance that you will end up with a "career" average of 1. If you flip it twice, there is a 25% chance that your career average is 1, even though the "true" average is .5. If you flip it 10 times, there is only a 0.1% chance that you still have a career average of 1, but a 38% chance that your career average is at least .6. However, if you go to 100 flips, the probability of a career average of at least .6 drops to about 3%.
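You can check those numbers directly from the binomial distribution: the chance of at least k heads in n flips of a coin with heads probability p.

```python
# Verify the coin-flip percentages quoted above.
from math import comb

def at_least(k, n, p=0.5):
    """Probability of at least k heads in n flips of a p-coin."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(at_least(10, 10))   # all heads in 10 flips: ~0.1%
print(at_least(6, 10))    # career average of at least .6 after 10 flips: ~38%
print(at_least(60, 100))  # at least .6 after 100 flips: ~3%
```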

Hopefully this makes sense so far. The point is that flipping a coin is a random process, and sometimes fair coins will show averages that aren't equal to the true probability. As the amount of experience (flips) rises, the probability of remaining just as far away from the true mean decreases. Remember that for all of the cases mentioned above, no matter how many flips and how many heads, you would always have a 50% chance of heads on future flips.

Let's make it a bit more complicated. You have two coins: one is fair (50% heads), and one is not (60% heads). You have flipped a randomly chosen coin 10 times, and you got at least 6 heads. What are the odds of getting heads on your next flip? That depends on the probability that your coin is the fair one or the weighted one. Well, the fair coin had a 38% chance of getting at least 6 heads, and the weighted coin had a 63% chance. Since you randomly chose your coin, there is a 62% chance (63/(63+38)) that the coin you have chosen is the weighted coin. Your EXPECTED future flip is 62%*60% + 38%*50% = 56%. Note that this is lower than the average flip thus far (which is at least .6). In fact, it is very close to the mean flip of 55% (the average of 50% and 60%).
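The same calculation, spelled out: a 50/50 prior on which coin you grabbed, updated on "at least 6 heads in 10 flips," then an expected value for the next flip.

```python
# Reproduce the two-coin Bayesian update from the text.
from math import comb

def at_least(k, n, p):
    """Probability of at least k heads in n flips of a p-coin."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

like_fair = at_least(6, 10, 0.5)      # ~38%
like_weighted = at_least(6, 10, 0.6)  # ~63%

# Equal prior odds on each coin, so the posterior is just the
# likelihood ratio.
p_weighted = like_weighted / (like_weighted + like_fair)  # ~62%
p_fair = 1 - p_weighted

# Expected probability of heads on the next flip: ~56%.
p_next = p_weighted * 0.6 + p_fair * 0.5

print(f"P(weighted coin): {p_weighted:.3f}")
print(f"P(heads next flip): {p_next:.3f}")
```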

In reality there is a different set of odds for exactly 6 heads, 7 heads, 8 heads, and so on, but using cumulative distributions made my job easier. I will probably have my math corrected anyhow, because I'm racing to do this and get back to my work.

The idea is that this extends to any baseball player. Joe Mauer has a career average of something like .320. There is a chance that he actually is a .320 hitter and has performed exactly as expected, that he is actually a .400 hitter and has been unlucky, or that he is actually a .200 hitter and should be expected to regress. The world of people hitting .320 is made up mostly of players whose true talent is worse than .320, some who truly are .320 hitters, and a few who are truly better than .320. The odds are that any .320 hitter has gotten lucky to some extent. The fact that he has performed above average does not make him a true .320 hitter, but it does increase the probability that he is an above-average hitter. However, the odds are less than 50% that he is truly a .320 hitter, because the average hitter is worse than .320.
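Here's a hedged illustration of that shrinkage (every number below is my assumption, not from the post): suppose true batting talent across hitters is roughly Normal(.270, .020), and a player hits .320 over 500 at-bats. Bayes' rule then puts his most likely true talent well below .320.

```python
# Illustrative only: assumed talent distribution Normal(.270, .020),
# assumed sample of 160 hits in 500 at-bats (a .320 season).
from math import exp, log

talents = [0.200 + 0.001 * i for i in range(201)]  # grid from .200 to .400

def normal_logpdf_kernel(x, mu, sigma):
    """Log of the normal density, up to an additive constant."""
    return -0.5 * ((x - mu) / sigma) ** 2

hits, ab = 160, 500  # observed .320

# Posterior over true talent: prior kernel times binomial likelihood
# (constants cancel), computed in log space.
logw = [normal_logpdf_kernel(p, 0.270, 0.020)
        + hits * log(p) + (ab - hits) * log(1 - p)
        for p in talents]
m = max(logw)
w = [exp(lw - m) for lw in logw]
total = sum(w)

post_mean = sum(p * wi / total for p, wi in zip(talents, w))
p_at_least_320 = sum(wi for p, wi in zip(talents, w) if p >= 0.320) / total

print(f"Posterior mean talent: {post_mean:.3f}")
print(f"P(true talent >= .320): {p_at_least_320:.3f}")
```

Under these assumed numbers, the posterior mean lands in the .290s and the chance the player is truly a .320-or-better hitter is small: the observed .320 gets pulled back toward the population average, exactly the regression the post describes.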

I hope this makes sense. I'd love to see a good discussion about this, because I think that a lot of smart baseball people on this board don't completely understand the logic behind it. The other factors, like age, injuries, coaching, experience, etc., complicate this model, but all have to be considered in aggregate to get fair, accurate projections.
