Monday, October 5, 2015


In general, Reilly and I don't have any great expectation that the subjects which interest us will be seen as particularly fascinating to other people.  For this reason, we tend to be cautious about what topics we bring up, even if it frustrates us a bit.  No, we probably shouldn't discuss our collection of ear wax.  Likewise, our fears about the Nazis who live on the dark side of the moon are something we tend to keep to ourselves.  We could also discuss our belief that slugs are at the top of the food chain (go ahead, name one person who has survived a slug attack), but we don't mention this very often.  Sometimes, we just suspect that the issues we want to explore might cross certain lines, and we'd prefer to avoid stirring up a kerfuffle.  Today, perhaps foolishly, we decided to pursue one of those unfortunate subjects that draws our interest.

The topic we want to examine this time is PFF (Pro Football Focus).  Now, to be perfectly clear, we generally like PFF (sort of), and appreciate the statistical data they provide to the football geeks of the world.  Being critical of what they do makes us feel a bit uncomfortable, particularly since they are a well-established site ($$$), while we rely on the prognostications of a dog (-$$$).  Even if we don't always agree with PFF, at least they're trying to apply clearly stated and measurable standards to their analysis of football, and this matters to us.  Geek-on-geek crime is not something we want to engage in, particularly since statistical analysis of the NFL is something the football world still hasn't strongly embraced.  Still, despite some wariness, there are some concerns we have with PFF that we feel we should discuss.

For those amongst you who aren't familiar with PFF, they run a site that accumulates data from NFL games, and attempt to analyze what all this raw information supposedly means.  It is an attempt to give us a better understanding of the game, something we feel is rather important, or at least interesting.  Reilly and I frequently agree with their assessments of particular players, and often make reference to them.  Admittedly, we are more likely to quote PFF when they agree with us, and ignore them when they don't.  That's just the sort of unreliable assholes we are.  Regardless, the information they compile is greatly appreciated by many of the NFL numbers geeks of the world, as assembling such quantities of data is a challenging task, and one beyond the means of individuals such as ourselves.

Where we sometimes run into problems with PFF is in their analysis of these mountains of data.  Different positions require different sorts of examinations, since productivity for a defensive lineman is obviously different than it would be for a wide receiver.  So, based on these different sets of criteria, PFF assigns "grades" in the areas they feel are relevant to the position in question.  The grades themselves are fairly meaningless on their own, and merely a tool for directly comparing players within a given position group.  These numerical grades, either positive or negative, are also highlighted in either green (good!) or red (FIRE BAD!), to give the casual observer a sense as to whether a particular player is performing at an above average level (or not).  This leads to an incredibly simple way of appraising players, though we suspect it is probably a bit too simple...and frequently a bit idiotic.

These grades, and this sort of analysis, are extremely results-oriented.  Getting a sack is better than not getting a sack.  Catching the ball is better than not catching the ball.  This is all fairly obvious.  While Reilly and I certainly don't want to downplay the importance of actual results, sometimes things get lost in these sorts of examinations.  Sometimes the results fail to convey why a player was able to perform well.  Sometimes we miss out on the context, which might better explain what is really happening.

I suppose our primary question/criticism is very simple, though its validity depends on what you believe should be the main goal of people who analyze the NFL.  Do we want to know who produced the best numbers?  Or, do we want to know who the best player is, even if their environment isn't exactly helping them out?  PFF might be able to answer the first question, at least to some degree.  The second question is vastly more complicated, and is the topic we want to take a look at today.

Flip a coin: Mediocrity or Star

Let's consider the subject of offensive linemen.  We ramble a lot about offensive linemen around here, and I think that our fascination stems from how boring a subject this probably is to most people.  Plus, fat guys in tight outfits are kind of funny.

When examining the performance of offensive linemen, PFF's criteria are fairly simple and easy to understand.  The method for grading these players comes mainly from two separate areas: their run blocking grade and their pass blocking grade.  For now, to keep things simple, we're just going to discuss how the pass blocking grade works...or doesn't.

Essentially, PFF simply tallies up the total number of pass attempts that a lineman was on the field for, and calculates what percentage of the time this lineman managed to keep his quarterback from being sacked.  This percentage is referred to as the player's Pass Blocking Efficiency, and superficially it seems to make some sort of sense.  Dead quarterback = bad.  Living quarterback = good.  Refer to PFF's handy red or green color code if you still need further clarification.
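For the sake of clarity, here's roughly what that calculation looks like as a few lines of Python. The function name is ours, and this is the stripped-down, sacks-only version of the idea described above; PFF's actual formula presumably folds in more than this:

```python
def pass_blocking_efficiency(pass_snaps, sacks_allowed):
    """Percentage of pass snaps on which the lineman kept his QB from
    being sacked -- the crude, sacks-only version described above."""
    if pass_snaps == 0:
        return 0.0
    return 100.0 * (pass_snaps - sacks_allowed) / pass_snaps

# A lineman on the field for 500 pass attempts who allowed 5 sacks:
print(pass_blocking_efficiency(500, 5))  # 99.0
```

Notice how little the number moves even when the sack totals change quite a bit, which is part of why nearly every lineman's "efficiency" looks superficially similar.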

Now, let's talk about truth with a capital "T".  While Reilly and I are inclined to believe in the merits of examining NFL players based on their measured athletic ability and statistical production, there are limitations to how much you want to trust such things.  Very simple statistics can suggest that there is an argument to be made that a player might be pretty good.  They don't necessarily always reveal the complete truth, though, and sometimes you need to dig a bit deeper.  Do I really believe that the player who allowed the fewest sacks is in fact the best pass blocker?  Or, do I think these outcomes can be influenced by numerous complicated factors?

Let's use two players, Ryan Clady and Orlando Franklin, to provide an example of how the value of this sort of data can become a bit murky.

In 2011, Ryan Clady was rated as PFF's 40th ranked offensive tackle (among tackles who played 50% of their team's total snaps), when it came to pass blocking.  Since there are 32 teams, each with 2 starting tackles (for a total of 64...yes, we know you could do the math), that would mean Clady was viewed by PFF as a somewhat below average tackle in 2011.  Then, in 2012, Ryan Clady was strangely ranked as the league's 4th best tackle (again, compared to tackles who played 50% of their team's total snaps) when it came to pass blocking.  That's a fairly remarkable rise in the rankings, going from the 40th slot to the 4th in just a year's time.  What exactly happened here?

Now, let's look at Orlando Franklin, who played at the opposite tackle position from Clady, for the Denver Broncos.  In 2011, Franklin's pass blocking had him ranked as PFF's 41st rated offensive tackle, just one slot shy of where we found his teammate Ryan Clady in that year.  Just like with Ryan Clady, this rating would seem to suggest that Franklin performed like a somewhat below average tackle in 2011.  Then, in the following year, 2012, Franklin's pass blocking performance had him ranked as PFF's 8th rated offensive tackle.  Again, Franklin's rating for this year was just a tad behind where we found Ryan Clady had surged to, and near the top of the league.  That all seems a bit peculiar, doesn't it? 

Though some people may disagree, Reilly and I tend to think that a player is what he is.  The "talent" of a player should be somewhat fixed.  Though experience may lead to improvement, and injury can make one decline, it seems unlikely that what a player is doing from year to year would radically change, even if the outcome from his efforts might vary significantly.  Yet, PFF seems to be suggesting that both of these tackles, playing on the same team at the same time, went from performing at a below average level to suddenly being among the top players at their position, at the same time, over the course of just one year.

What exactly is PFF telling us about these players, and is there any way to figure out why their opinion changed so radically?  Is PFF telling us anything about the quality of these players, or merely pointing towards the circumstances they might have struggled with?

The sleeper must awaken!

Of course, there is a pretty obvious answer as to why PFF's opinion of these players shifted so dramatically in just one year.  Something very significant happened in 2012 for the Denver Broncos, which likely benefited every player on the team's offense.  This was the arrival of that scrappy, unknown quarterback Peyton Manning, who came to replace the heaven-sent Tim Tebow.  Ryan Clady and Orlando Franklin probably didn't change what they were doing at all, from 2011 to 2012.  It seems more likely that it was the perception of their performance that changed, now that they were protecting a different, far more competent quarterback.

Let's consider what the sack rate has been for quarterbacks in Denver, both before and after Manning's arrival.  Below, we will list these sack rates (the percentage of passing plays by the team that resulted in a sack), along with the name of the team's primary quarterback in each year.  We're also including the rate at which the team's quarterback was hurried, even though we personally place much less value on this, and think it is a statistic of questionable worth.


Year    Sack %    Hurry %    Primary QB
2007    5.15      21.74      J. Cutler
2008    2.05      21.77      J. Cutler
2009    5.58      23.11      K. Orton
2010    5.69      25.68      K. Orton
2011    7.14      34.96      T. Tebow
2012    3.13      11.39      P. Manning
2013    2.31      17.18      P. Manning
2014    2.09      14.82      P. Manning

So, in the years from 2007 to 2011, Broncos quarterbacks were getting sacked on average about 5.12% of the time.  Those would arguably be fairly average results for an NFL team.  Only Jay Cutler's 2008 season was a significant improvement in this area (2.05%), and this fluky season probably contributed a great deal towards people's inflated opinion of him, and fed into the Bears' eagerness to trade for Cutler.  Tebow's 2011 season was clearly fairly horrible, with a 7.14% sack rate.  From 2012 through the 2014 season, the Manning-led Broncos had a sack rate that averaged 2.51%, about half the average rate of sacks prior to his arrival, and 2.84 times better than it was in Tebow's 2011 season.
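If you'd like to check our napkin math, here it is as a few lines of Python, using the sack rates straight from the table above:

```python
# Broncos sack rates (%), taken from the table above
pre_manning  = [5.15, 2.05, 5.58, 5.69, 7.14]  # 2007-2011
with_manning = [3.13, 2.31, 2.09]              # 2012-2014

avg_pre  = sum(pre_manning) / len(pre_manning)
avg_with = sum(with_manning) / len(with_manning)

print(round(avg_pre, 2))          # 5.12 -- pre-Manning average
print(round(avg_with, 2))         # 2.51 -- Manning-era average
print(round(7.14 / avg_with, 2))  # 2.84 -- Tebow's 2011 vs the Manning era
```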

That sort of shift could clearly influence people's opinion of how the Broncos offensive line was performing, but how likely is it that a quarterback can really have that sort of effect on a team's sack rate?  Well, let's take a look at what happened to the Indianapolis Colts, both before and after Manning's departure.


Year    Sack %    Hurry %    Primary QB
2007    2.99      29.76      P. Manning
2008    2.33      25.29      P. Manning
2009    1.79      19.46      P. Manning
2010    2.16      22.09      P. Manning
2011    5.82      21.16      Painter/Orlovsky
2012    5.13      29.93      A. Luck
2013    5.05      25.25      A. Luck
2014    3.36      22.69      A. Luck

In the years from 2007 to 2010, the chart above shows that the Manning-led Colts averaged a sack on 2.31% of their passing plays.  That's roughly the same sack rate that we saw for Manning in Denver, and a fairly ridiculous result.  In the years from 2011 through 2014, after Manning's departure, the Colts averaged a sack on 4.84% of their passing plays, which again is about twice the rate of the Manning-led years.  That's not a terrible result, but it is also quite similar to how the pre-Manning Broncos performed.  While Andrew Luck may be improving in this area, based on his 2014 sack rate of 3.36%, it is difficult to say whether his results will ever reach Manning's level.

Admittedly, having Manning change teams gave us a somewhat rare opportunity to examine the degree to which these sorts of peculiar and positive effects are transferable from one team to another.  Great players often spend the majority of their career in one city, which makes dissecting their real impact complicated.  Dropping them into a different environment is often the closest we can really get to having a control group.  The only other way we get to test these sorts of things is when someone is injured.

That brings us to Tom Brady, the 2008 season he missed due to a knee injury, and the resulting emergence of Matt Cassel.  We'll leave out the 'hurry' statistics this time.


Year    Sack %    Primary QB
2007    2.81      T. Brady
2008    7.93      M. Cassel
2009    2.95      T. Brady

Now, I suspect everyone will recall the 2008 Patriots season, and the degree to which people scrutinized the way Matt Cassel filled in for the injured Tom Brady.  For the most part, people seemed to feel that Cassel filled in somewhat admirably, and in this atmosphere of deranged optimism the Chiefs traded Mike Vrabel and a high 2nd round draft pick to acquire Cassel.  They would also quickly give Cassel a $62 million contract extension.  What was overlooked in all of this lunacy was the precipitous rise in sack rate that occurred during Cassel's time under center for the Patriots.  The Patriots were getting their QB sacked 2.82 times as often in 2008, with Cassel under center, as they were in 2007.  When Brady returned in 2009, the sack rate would magically go back to very much the same place it was prior to his injury.  There was clearly something missing in Cassel that Brady seemed to possess.
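That 2.82 figure is nothing fancier than the ratio of the two sack rates from the table above:

```python
# Patriots sack rates (%), from the table above
brady_2007, cassel_2008, brady_2009 = 2.81, 7.93, 2.95

print(round(cassel_2008 / brady_2007, 2))  # 2.82 -- Cassel's year vs the year before
```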

The real question here is, do you think the Patriots offensive line was performing exceptionally in 2007, suddenly decided to tank in 2008, and then miraculously got their shit together in 2009?

Now, admittedly, using Peyton Manning and Tom Brady as examples of how a quarterback can influence a team's sack rate can cause people to jump to some weird conclusions.  These guys are clearly rather peculiar players, and their influence over this aspect of the game is a bit unusual.  We're obviously not trying to suggest that all 'elite' (uggh, the "e" word) quarterbacks have this sort of effect on the results of their offensive line.  They don't.  From quarterback to quarterback, the ability to influence a team's sack rate can be wildly different.  For instance, we suspect that Alex Smith kind of makes offensive linemen look terrible, whether in San Francisco or in Kansas City, though that might be a subject for another day.  Without putting each player into a different environment, or having a method of establishing a control group, it's difficult to really pin down the precise degree to which one player influences the outcome of another.

Still, we do know that this sort of influence from the QB position happens, even if we can't always perfectly measure it.

So, does it seem as if the person playing quarterback might have a fairly stunning influence on the public's perception of how the offensive line is performing?  Does it seem likely that transitioning from Tim Tebow to Peyton Manning was probably the key factor in how the performance of Ryan Clady and Orlando Franklin was perceived by PFF?  It certainly seems that way to us.  It really makes us wonder to what extent we should take PFF's grades seriously, when their evaluation of a player seems like it could shift with the wind.

Context is a bitch.

In this particular case, we were only discussing how an offensive lineman's pass blocking efficiency can be influenced by the person he is protecting.  The context of the situation does appear to matter, and this is something PFF frequently glosses over, or outright ignores.  Unfortunately, this lack of context is an issue that arises at nearly every position one can discuss.

When examining pass rushers, PFF brings out their Pass Rushing Efficiency grades, which are effectively the same thing as the Pass Blocking Efficiency grades, only turned on their head.  It becomes a simple calculation of how often a player was sent after the QB, and what percentage of the time this resulted in a sack (or a hurry).  Now, should a lone pass rusher be evaluated solely on the rate at which he gets to the quarterback, with no consideration given to how his teammates might affect his results?  Maybe a defensive end who gets 8 sacks, on a team that only produced 30 sacks in total, is more impressive than a similar player who produced 12 on a team that had 39 total sacks?  Maybe these two players are effectively the same?  Maybe it's not simply the rate at which sacks are produced by a player, but the degree to which a team's pass rush can come from multiple players, versus one isolated and therefore easily blocked individual?
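To make that last question concrete, one admittedly crude alternative measure is a rusher's share of his team's total sacks, rather than his raw count. Using the hypothetical players from the paragraph above:

```python
def sack_share(player_sacks, team_sacks):
    """A player's share of his team's total sacks, as a percentage."""
    return 100.0 * player_sacks / team_sacks

print(round(sack_share(8, 30), 1))   # 26.7 -- 8 of 30 team sacks
print(round(sack_share(12, 39), 1))  # 30.8 -- 12 of 39 team sacks
```

By this particular measure the 12-sack player actually comes out slightly ahead, which mostly goes to show that the answer depends heavily on which metric you pick.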

Is a wide receiver going to perform better when playing with one of the league's top quarterbacks?  Could having a viable receiving threat on the other side of the field influence a receiver's ability to perform?

Context...context...context.  It always matters, and yet frequently gets ignored by PFF, because it is probably the most difficult part of examining the NFL, and perhaps also the most meaningful problem to solve.  Identifying how and why a player produces results should get us closer to understanding who is actually contributing the most, rather than who is merely producing numbers.

If Player X performs to the PFF standard one day, they will be graded well.  If Player X performs poorly in the next game out, they will get a poor grade.  If Player X has a bunch of lovely green grades, with positive numbers, will that trend continue when he is placed on another team?  PFF can't/won't say, because their goal clearly isn't to predict the future.  PFF are basically like weathermen who can only tell you if it rained yesterday.  Of course this approach doesn't really answer our real question: what is the true nature of Player X?  Is he essentially good, or a bum?

Interestingly, we think PFF has placed themselves in a position where they will never have to admit that they are wrong.  The complex soup of the NFL, and the way teams assemble their rosters, can make pursuing the answers to particular questions very difficult.  That may be where the true genius of PFF really lies.  Rarely, if ever, do I see them say "according to this statistic, we feel that this player is the best at their position".  Instead, they frequently just list players in order, according to their grades in a particular area, and let you come to the conclusion "Hey, this guy must be the best!".  PFF's pretty numbers may nudge you in a particular direction, but you wind up at this conclusion all on your own.

Maybe PFF is misleading.  I don't know, and I'm not sure I would really want to say anything about that.  All I can say is that the degree to which PFF's statistics are being taken as gospel (at least by some people) might be a bit premature, and it makes me a bit uncomfortable.  It's particularly worrisome when I see some fans, reporters, and game day announcers discussing PFF grades without really digging into the subject itself, or questioning what the numbers are based upon.  Don't get me wrong.  PFF does have valuable information buried in their numbers, but people need to really analyze it, and question what the data means, rather than blindly trusting PFF's interpretation of the facts.

There's also a certain utility in these statistics, which can be destructive.  Even when the numbers are possibly flawed, or being applied incorrectly, they can be used to intimidate others, and end debates.  The analysis of what is really going on in football is still so clearly in its infancy, that silencing discussion would seem to be unfortunate, and counterproductive to our real goals. 

I suppose Reilly and I were also motivated to broach this subject because of a recent announcement made by PFF, about how they will be conducting their business in the future.  Going forward, it appears PFF will no longer provide access to their raw data (a useful tool to many of us geeks), and will instead deliver only their processed and pasteurized grades for players (pretty much worthless).  So, they will continue to provide their analysis of the data, while hiding the data upon which their judgment is based from the eyes of the public.  As a friend pointed out upon hearing this announcement, they will effectively be charging people for the sort of "Overall Grades" that you find in the Madden video games.  Added context, and second-guessing their interpretation of the data, clearly aren't things PFF is interested in.

As they also mention in this announcement, this new (inferior) form of data will be the same as what they provide to 19 NFL teams.  If it doesn't worry you that NFL teams could potentially be making decisions based on the grades that PFF has been providing, well, welcome to the new NFL.  Personally, I'm a bit annoyed about where this is all leading.

