I've blogged before about the idea of getting rid of unearned runs, and hence ERA, and simply moving to RA (run average) for pitchers. This idea is championed by Michael Wolverton of Baseball Prospectus. Recently, Rob Neyer's web site posted an article by Keith Scherer attacking the idea as discussed by Bob Sheehan on BP. Scherer says:
"The argument against ERA is straightforward: the distinction between earned and unearned runs (UER) is a false dichotomy caused by a misperception. As Michael Wolverton puts it, 'The main problem with unearned runs isn't errors, it's the notion that the pitcher's job ends whenever an error is made.'
"ERA supposedly avoids charging a pitcher with runs that scored through no fault of his own. According to Baseball Prospectus, this metric has it backwards. ERA seeks to blame fielders for unearned runs, but unearned runs, like earned runs, are really caused by pitching failures and not by bad fielding. Wolverton is Baseball Prospectus's chief exponent of the argument. He puts it this way:
"'Errors will happen. Good pitchers will minimize the damage caused by them. That is, a good pitcher will allow fewer runners on base before the errors happen (so there aren't runners to score on the errors), and will allow fewer hits and walks after errors happen (so the runners who reached on errors won't score).'"
Scherer goes on to discuss why, in his view, this argument is wrong. His two lines of evidence are that:
1) Comparing teams since 1990 with high and low numbers of unearned runs, the difference in errors between the two groups is far more dramatic than the differences in hits, walks, home runs, and earned runs
2) A random sample of 10 pitchers showed that 7 of them were within 2 percent of each other in percentage of runs that were unearned (UER%), the average being around 10%
His first line of evidence ties unearned runs to errors (putting the onus on the fielders), while the second shows that the percentage of unearned runs varies little from pitcher to pitcher (taking the onus off the pitchers).
I wanted to look into his second point more closely, so I constructed a study in which I selected all of the pitcher seasons since 1960 and calculated their UER% (percentage of runs allowed that were unearned), URA (unearned run average, i.e. the number of unearned runs given up per 9 innings), team errors per game (E/G), BABIP (batting average on balls in play), WHIP (walks plus hits per inning pitched), ERA+ (ERA relative to the league, with 100 being league average), and K/9 (strikeouts per 9 innings).
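For readers who want to follow along, the rate stats can be computed from a pitcher's season line roughly as follows. This is only a sketch: the field names are made up rather than taken from my actual database extraction, team errors per game comes from team rather than pitcher totals and so is omitted, and the BABIP denominator ignores sacrifice flies.

```python
def season_metrics(r, er, ip, h, bb, so, hr, ab_against, lg_era):
    """Rate stats used in the study for one pitcher season.

    r = runs allowed, er = earned runs, ip = innings pitched,
    h/bb/so/hr = hits, walks, strikeouts, home runs allowed,
    ab_against = official at-bats by opposing hitters,
    lg_era = the league ERA for that season.
    """
    uer = r - er                                     # unearned runs
    era = 9.0 * er / ip
    return {
        "UER%":  uer / r if r else 0.0,              # share of runs that were unearned
        "URA":   9.0 * uer / ip,                     # unearned runs per 9 innings
        "ERA":   era,
        "ERA+":  100.0 * era / lg_era,               # as used here: > 100 is worse than league
        "K/9":   9.0 * so / ip,
        "WHIP":  (bb + h) / ip,
        "BABIP": (h - hr) / (ab_against - so - hr),  # sacrifice flies ignored
    }

# A hypothetical 200-inning season:
print(season_metrics(r=90, er=80, ip=200, h=190, bb=60, so=150, hr=20,
                     ab_against=760, lg_era=4.00))
```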
First, the totals for these 19,185 seasons:
UER% = 10.2%
URA = .450
E/G = .805
ERA = 3.95
K/9 = 5.7
BABIP = 0.313
WHIP = 1.35
So the average pitcher since 1960 gives up just under half an unearned run per nine innings, with about 10% of his runs being unearned. This validates Scherer's data point that, historically, about 10% of runs are unearned.
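As a quick sanity check, the study-wide totals hang together: dividing the unearned run average by total runs allowed per nine innings reproduces the 10.2% figure.

```python
# Consistency check using the study-wide totals listed above
era, ura = 3.95, 0.450
ra = era + ura            # total runs allowed per 9 innings
print(ura / ra)           # ~0.102, matching the reported UER% of 10.2%
```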
To see whether UER% or URA varies with the quality of the pitcher, I split the pitchers by ERA+ below and above 100 (by this definition of ERA+, > 100 is worse than the league average). Doing so I got the following:
             #      URA    E/G    UER%   ERA+    ERA    K/9    BABIP
ERA+ < 100   7739   .415   .787   .114   80.9    3.19   6.3    .296
ERA+ > 100   9202   .486   .807   .091   122.5   4.83   5.7    .329
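The split itself is straightforward to reproduce. Here is a rough sketch, assuming the per-season metrics have been collected into a pandas DataFrame called seasons (a hypothetical name) with the columns shown in the table; it uses simple unweighted means, whereas the figures above may be weighted by innings pitched. The same function covers the K/9 and WHIP splits shown later.

```python
import pandas as pd

def split_summary(seasons: pd.DataFrame, col: str, threshold: float) -> pd.DataFrame:
    """Average the study's rate stats for seasons below vs. above a threshold."""
    labels = (seasons[col] < threshold).map(
        {True: f"{col} < {threshold}", False: f"{col} > {threshold}"})
    stats = ["URA", "E/G", "UER%", "ERA+", "ERA", "K/9", "BABIP"]
    out = seasons.groupby(labels)[stats].mean()
    out.insert(0, "#", seasons.groupby(labels).size())  # seasons in each group
    return out

# e.g. the ERA+ table above, or the later splits:
# print(split_summary(seasons, "ERA+", 100))
# print(split_summary(seasons, "WHIP", 1.36))
```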
The pitchers with lower ERAs (the first row) had a higher UER% and a lower URA. In other words, the better pitchers actually gave up more unearned runs as a percentage of their total runs allowed. A higher UER% makes sense for better pitchers on the theory that the defense plays the primary role: better pitchers give up fewer earned runs by definition but can't control unearned runs to the same extent, so unearned runs make up a larger share of their total. Although Scherer did not point this out in his article, this supports his argument. Score one for Scherer. Notice that this occurred even though the teams played slightly better defense behind the better pitchers (.787 errors per game versus .807, a difference that translates to about half an error per 200 innings pitched).
On the other hand, the pitchers with lower ERAs also had a lower URA. A lower URA supports the theory that better pitchers give up fewer unearned runs because they don't allow runners who reach base via errors to score as often, and so the earned/unearned distinction is not that important. Score one for Wolverton.
So which theory is correct? They both are.
To understand why, you simply need to think about how unearned runs actually score. It is obvious that even good pitchers can be victimized by bad defense. Consider the case where a pitcher gets the first out on a groundball, gives up a bloop double, strikes out the third batter looking, and then a ground ball goes right through the first baseman's legs, scoring the runner from second. These kinds of situations are out of the control of the pitcher, and regardless of how good the pitcher is, the run will still score. I think these kinds of situations result in a higher UER% for good pitchers than for bad ones. Conversely, we've all seen innings where the leadoff hitter gets on base via an error, the pitcher strikes out the second hitter, gives up a single to the third hitter putting runners on first and second, the fourth batter flies out, and the next batter hits a three-run homer, making all three runs unearned (without the error the inning would have ended before the homer). These are the kinds of things that happen to bad pitchers which good pitchers avoid, and which tend to drive up the URA for bad pitchers.
The question then is not which of these theories is true, but rather which of them is more important and has more impact.
This can be calculated by looking at the run differences produced by the variation in URA. The difference of .071 runs per 9 innings between low- and high-ERA pitchers works out to about 1.6 runs over the course of 200 innings, or roughly 16% of the 10 or so unearned runs the average pitcher allows over that span. In other words, better pitchers do appear to suppress unearned runs, but only marginally, saving their teams a handful or fewer runs per year. The majority of unearned runs (the remaining 9 or so over 200 innings) appear to be of the variety that would score anyway.
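The arithmetic behind that figure, using the table values (a back-of-the-envelope check rather than anything exact):

```python
ura_good, ura_bad = 0.415, 0.486           # URA for the ERA+ < 100 and ERA+ > 100 groups
diff_per_9 = ura_bad - ura_good            # 0.071 unearned runs per 9 innings
runs_saved = diff_per_9 * 200 / 9          # ~1.6 runs over 200 innings
total_uer = 0.450 * 200 / 9                # ~10 unearned runs for the average pitcher
print(runs_saved, runs_saved / total_uer)  # ~1.6 runs, ~16%
```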
Now of course proponents of getting rid of unearned runs will point out that the split based on ERA was perhaps not the correct way to look at the data. Perhaps other skills a pitcher has suppress unearned runs to a much greater extent?
To check this out I also split the data by K/9 above and below the league average and WHIP above and below 1.36 (the average for the seasons in the study). I got the following results.
             #      URA    E/G    UER%   ERA+    ERA    K/9    BABIP
K/9 > Lg     9829   .426   .794   .103   96      3.78   7.26   .312
K/9 < Lg     7112   .472   .804   .105   104.1   4.11   4.64   .311

             #      URA    E/G    UER%   ERA+    ERA    K/9    BABIP
WHIP < 1.36  7177   .394   .796   .108   85.4    3.27   6.19   .293
WHIP > 1.36  9764   .513   .800   .098   117.4   4.75   5.85   .333
Examining these results, you'll notice that the differences in UER% and URA are not as great between strikeout and non-strikeout pitchers as they are between low- and high-ERA pitchers. However, high-strikeout pitchers did give up a lower percentage of their runs as unearned, which makes sense if you assume that high-strikeout pitchers suppress unearned runs by not allowing as many runners to advance on outs.
What's interesting, however, is that the low-WHIP pitchers have the lowest URA in the study at .394, which translates into a savings of over 2.5 runs per 200 innings pitched, or about 30%. Common sense says that this is because fewer baserunners means fewer opportunities for errors to produce unearned runs. But even if you widen the split, taking only the top and bottom 5% of pitchers, the difference is around 10 unearned runs over the course of 200 innings. So although keeping runners off base makes a difference in giving up unearned runs, once again the difference does not account for the lion's share of the unearned runs given up.
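The same back-of-the-envelope check applied to the WHIP split:

```python
ura_low, ura_high = 0.394, 0.513   # URA for the WHIP < 1.36 and WHIP > 1.36 groups
diff_per_9 = ura_high - ura_low    # 0.119 unearned runs per 9 innings
print(diff_per_9 * 200 / 9)        # ~2.6 runs saved over 200 innings
```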
As I mentioned above, both ideas about unearned runs are true. They are a product of the defense, and they are magnified when errors are made behind bad pitchers. It's just that they're not magnified to the extent that charging pitchers with all of their unearned runs would make the assessment more accurate. A good working estimate would be that between 16% and 30% of unearned runs should be assigned to the pitcher and the remainder to the defense.