Regular readers know that I’m not a huge fan of yards per carry as a metric to evaluate running backs, or even rushing attacks. Given the limited number of metrics available, it is sometimes a useful measure, but we must also caution ourselves against relying on it too often. Today’s post is another example of that.
I looked at all rushing attacks since 2002, and calculated the yards per carry gained by each team’s top running back and second running back, as measured by carries (excluding pure fullbacks and situations where the second running back had fewer than 50 carries). If yards per carry were the best way to evaluate running backs, and coaches wanted to play their best players most frequently (and, of course, coaches were able to identify their best players), then RB1s should be better than RB2s at yards per carry.
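For readers who want to see the classification step concretely, here is a minimal sketch in Python. Everything about it is an assumption for illustration: the file name rb_seasons.csv and its columns (season, team, player, position, carries, rush_yards) are hypothetical, not the actual data source used for this post.

```python
import pandas as pd

# Hypothetical player-season rushing table; the file name and columns
# (season, team, player, position, carries, rush_yards) are assumptions.
df = pd.read_csv("rb_seasons.csv")

rows = []
for (season, team), group in df[df["season"] >= 2002].groupby(["season", "team"]):
    # Rank a team's backs by carries, excluding pure fullbacks.
    backs = group[group["position"] == "RB"].sort_values("carries", ascending=False)
    if len(backs) < 2 or backs.iloc[1]["carries"] < 50:
        continue  # skip team-seasons whose RB2 had fewer than 50 carries
    rb1, rb2 = backs.iloc[0], backs.iloc[1]
    rows.append({
        "season": season,
        "team": team,
        "rb1_ypc": rb1["rush_yards"] / rb1["carries"],
        "rb2_ypc": rb2["rush_yards"] / rb2["carries"],
        "rb2_carry_share": rb2["carries"] / rb1["carries"],
    })

pairs = pd.DataFrame(rows)  # one row per qualifying team-season
```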
Last year, Bears rookie Jordan Howard averaged 5.2 yards per carry on 252 carries, while backup Jeremy Langford (who actually opened the season as the starter) gained just 3.2 yards per carry on 62 carries. That’s a piece of evidence that YPC is useful: Howard was much better than Langford by YPC, and he received far more carries. But that +2.0 discrepancy was the largest in football last year; this is an outlier, not a typical example.
How about an outlier in the other direction? The Jets’ top running back last year was Matt Forte, who started 13 games and handled 218 carries; he averaged 3.7 yards per rush. Meanwhile, backup Bilal Powell averaged 5.5 yards per carry on 131 carries. This might mean that Powell is better than Forte, but at least last year, the Jets didn’t seem to think so, or perhaps realized it too late.
These are two interesting examples because they show some of the difficulties in properly analyzing the issue. Howard was the backup, but because he was so much better than Langford, he gets graded as Chicago’s top running back. This biases the study in favor of RB1s: if a running back is producing at a high rate, even if he’s the backup, he may wind up leading his team in rushes that year (thanks to earning more carries in the second half of the season), which means RB1s in general will appear to have higher yards per carry than RB2s. In that way, it becomes a self-fulfilling prophecy.
On the other hand, look at the Jets example. I think we would all agree that Powell was the Jets’ best running back last season. He actually finished with more yards from scrimmage than Forte (thanks to playing in two more games), and nearly outrushed him, too. Forte was the (moderately) high-priced veteran free-agent signing, while Powell was intended to be the backup. By the end of the year, it was clear that Powell was the Jets’ best running back (if not best player), but he didn’t have enough carries to overtake Forte for RB1 status. So in some ways, this study may not properly identify a team’s true top running back and backup running back, if we classify those players only by carries.
So there are some issues with this on a case-by-case basis. That’s why the best thing to do is to aggregate the data. The graph below shows the average yards per rush gained by the average team’s RB1 and the average team’s RB2 in each year since 2002:
This is by no means an exhaustive study, but the results are still pretty convincing. Since ’02, RB1s have averaged 4.26 yards per rush, while RB2s have averaged 4.17 yards per rush. On average, the RB2 has recorded just 46% as many carries as the RB1, while averaging 98% as many yards per carry. If yards per rush were a good way to evaluate running backs, shouldn’t the best running backs have much higher YPC averages than their backups?
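Continuing the earlier sketch (and its hypothetical pairs table), the yearly averages behind the graph and the aggregate figures quoted above reduce to a few lines. The numbers in the comments are the values reported in this post, included only to show what each line computes, not outputs I’ve verified:

```python
# Continuing from the `pairs` table built in the earlier sketch.
yearly = pairs.groupby("season")[["rb1_ypc", "rb2_ypc"]].mean()
print(yearly)  # the two series plotted in the graph above

overall_rb1 = pairs["rb1_ypc"].mean()   # ~4.26 in this post's data
overall_rb2 = pairs["rb2_ypc"].mean()   # ~4.17
print(overall_rb2 / overall_rb1)        # ~0.98: RB2s average ~98% of RB1 YPC
print(pairs["rb2_carry_share"].mean())  # ~0.46: RB2s get ~46% as many carries
```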
Again, this is by no means an exhaustive study, and there are other issues to consider (for example, how much YPC may, or may not, decline with in-game workload). That said, I suspect that if we had a much better measure of RB quality, the difference between RB1 and RB2 production would be much more pronounced.
What do you think? What study would you run if you had all the necessary data?