RESEARCH: Are Disaster innings important and predictable?

Disaster innings can ruin a pitcher’s ERA and his team’s chances of winning. Pitchers try to avoid these blowup innings by limiting the damage once runners reach base. I’m going to define what a disaster inning is and then ask two questions: 1) Does limiting disaster innings help a pitcher’s ERA? and 2) Are disaster innings predictable? The results point to yes on both counts.

If a pitcher can spread out the hits and walks over a game, the opposition has fewer scoring chances. On the other hand, a pitcher may be lights-out for all but one inning but then allow five runs once hitters start getting on base.

A reason for these disasters could be that the pitcher is very good from the windup but really struggles from the stretch. Another possibility is that the pitcher struggles in the later innings because he doesn’t have a third pitch or loses velocity. Finally, the pitcher may not be able to control the running game (see Jon Lester). No matter the possible cause, the metric needs to be predictable to really matter. It can't just be a random noise generator.

The first step is to define a disaster inning. I tried several combinations and came up with the following equation:

Disaster Inning (DI) Events = Hits + HR + BB + HBP − GDP − K

Basically, the equation counts the events which put runners on base and subtracts the events which remove them or keep the ball out of play. Using the equation and the past five seasons of data, I found how often each event total is reached and the average number of runs allowed in those innings. Note that we're not using an inning in the traditional sense, where three outs must be recorded. Instead, it’s any inning a pitcher appears in, independent of the total outs recorded.
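The tally itself is simple; here's a minimal sketch of the event count (the function name and inputs are my own, not from the article):

```python
def disaster_events(hits, hr, bb, hbp, gdp, k):
    """Disaster Inning (DI) events for a single pitcher-inning: the events
    that put runners on base minus the events that remove them or keep the
    ball out of play."""
    return hits + hr + bb + hbp - gdp - k

# e.g. two hits and a walk against one strikeout and no double plays:
disaster_events(hits=2, hr=0, bb=1, hbp=0, gdp=0, k=1)  # -> 2
```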

A jump in runs scored happens when the disaster event total reaches two. Just a couple of baserunners lead to nearly one run (0.9) scoring on average. Overall, a pitcher allows a total of two or more Disaster Events in just over 20% of all innings.
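That 20% figure corresponds to counting innings at or above the two-event threshold. An illustrative tally (my own code, not the study's):

```python
def disaster_inning_pct(di_events_by_inning, threshold=2):
    """Share of innings whose DI event total reaches the disaster threshold."""
    disasters = sum(1 for e in di_events_by_inning if e >= threshold)
    return disasters / len(di_events_by_inning)

# Five sample innings, two of which reach the two-event threshold:
disaster_inning_pct([0, 1, 3, 0, 2])  # -> 0.4
```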

The next hurdle to overcome is adjusting the bad-inning rate for the pitcher’s ERA and xERA. This needs to be done because good pitchers are going to have fewer bad innings compared to bad pitchers. I'm not trying to find a metric which measures pitcher quality, but rather one that measures whether the pitcher can limit bad innings given his talent level.

There were several values which could have been adjusted. I went with the approach of adjusting the Disaster Inning% up or down depending on the pitcher’s ERA relative to the league average. A good pitcher would see his DI% adjusted up, and vice versa for bad pitchers.
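The article doesn't spell out the exact adjustment, but a simple ratio-based version might look like this (the scaling form is my assumption):

```python
def adjusted_di_pct(di_pct, pitcher_era, league_era):
    """Scale a pitcher's Disaster Inning% by league ERA over pitcher ERA.

    A better-than-average pitcher (low ERA) gets his DI% scaled up and a
    worse-than-average pitcher gets it scaled down, so the result measures
    big-inning avoidance relative to overall talent rather than talent itself.
    NOTE: the ratio form is an assumption; the article only says DI% is
    adjusted up or down based on pitcher and league ERA."""
    return di_pct * (league_era / pitcher_era)

# A 3.00-ERA pitcher in a 4.50-ERA league sees his DI% scaled up 1.5x.
```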

From there, I compared the adjusted DI% to each pitcher’s xERA-ERA. For pitchers with at least 50 IP in a season, here is the correlation.

While I expected a subpar correlation, the .06 value is tough to swallow. Only a very small part of the difference between a pitcher’s ERA and xERA can be explained by blowup innings. Some relationship exists, but other factors weigh far more heavily on the difference between ERA and xERA.
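For reference, a correlation that weak means adjusted DI% explains well under 1% of the variance in the ERA-xERA gap. The check itself is a one-liner (the DataFrame and its column names are hypothetical):

```python
import pandas as pd

def era_gap_corr(df):
    """Correlation between adjusted DI% and the xERA-ERA gap, given one row
    per qualifying pitcher-season (column names are illustrative)."""
    r = df["adj_di_pct"].corr(df["xera_minus_era"])
    return r, r ** 2  # r ~ 0.06 implies r^2 ~ 0.004: <1% of variance explained
```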

Even though the relation between ERA-xERA and DI% is small, maybe the ability to limit the big inning is a projectable talent.

To find out if it is predictive, I matched back-to-back pitcher seasons of adjusted DI%. With the adjustment, the pitcher’s talent should already be factored into DI%. Here are the paired year-to-year adjusted Disaster Inning% values.
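The pairing can be sketched with a self-merge, assuming a table with one row per pitcher-season (the table layout and column names are mine):

```python
import pandas as pd

def year_to_year_corr(df):
    """Pair each pitcher's adjusted DI% with his next season's value and
    return the correlation across all such back-to-back pairs."""
    # Shift season back by one so each year N+1 row lines up with year N.
    nxt = df.assign(season=df["season"] - 1)
    pairs = df.merge(nxt, on=["pitcher_id", "season"], suffixes=("_y1", "_y2"))
    return pairs["adj_di_pct_y1"].corr(pairs["adj_di_pct_y2"])
```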

It’s a decent correlation and way more than I expected. I may be onto something … or maybe not.

As I dug into those pitchers with the ability to limit Disaster Innings, high-strikeout pitchers stood out even after adjusting for ERA. Matt Swartz noticed a few seasons back that the benefits of strikeouts are not linear: they increase exponentially as strikeout rate increases linearly. A pitcher with a high strikeout rate (e.g. Max Scherzer) can prevent big innings better than a pitch-to-contact pitcher (e.g. Kyle Hendricks) who posts a similar ERA. For example, here are the 2017 high-strikeout starters and their ERAs and xERAs.

The top strikeout starters see their ERA come in about a half run lower than their xERA.

For 2017, here are the top- and bottom-20 pitchers ranked by their ERA-adjusted Disaster Inning% (aDI%); the full 2017 numbers are in this Google Doc.


Truthfully, I didn’t expect to find a year-to-year correlation in Disaster Innings. Since I did, it'd be best to figure out a way to incorporate the values.

After mulling over the information for a few days, I think the key to moving forward may first reside in an adjustment to xERA to account for high-strikeout pitchers. I prefer a few simple stats when analyzing players; xERA is a simple concept, and correcting the near-0.50 run gap between ERA and xERA may alleviate most of the discrepancy. I can’t be sure that making xERA more accurate will remove all of the high-strikeout bias, but it's a nice place to start. Until future studies can be done, I’d give more weight to BHQ's ERA projection than to the xERA projection for a high-strikeout pitcher.

Allowing an abnormally high number of disaster innings can push a pitcher’s ERA over his xERA. After correcting for differences in pitcher talent, the ability to mitigate disaster innings is a projectable talent. The problem is that high-strikeout pitchers disproportionately display this trait. The effect strikeouts have on xERA is non-linear, while the current calculations are linear. This hang-up is tough to overcome. For now, I’d weigh the ERA of high-strikeout pitchers more than their xERA until the error is corrected and disaster innings can be incorporated.


For more information about the terms used in this article, see our Glossary Primer.