I've already briefly mentioned some of my weird thoughts related to the athletic qualities I look for when drafting NFL centers. There have also been some earlier posts that lightly explore the traits that I think separate guards from tackles (seen here and here). Still, the question remains: could you actually construct a complete offensive line based on these sorts of simple-minded measurements?
Answering that question is slightly complicated. Still, I thought it would be fun to look back over the past ten years (the 2003-2012 draft classes), and see how the computer could have theoretically done, compared to a typical NFL team. However, since hindsight is 20/20, we have to establish some rules for the computer, to keep things as fair as possible. Many of these rules will significantly handicap the computer, but I wanted to avoid having things appear to be rigged in the computer's favor.
Rule #1- The average NFL team drafts 1.28 offensive linemen each year, or 12.8 players over the ten years we will be looking at. Having the computer select one offensive lineman per year is simple enough, but those extra 2.8 players present a problem. Deciding which years the computer could pick an extra offensive lineman would be too much of an advantage, since we now know which years had better talent. Instead, I decided to give the advantage to the NFL GMs, and not allow the computer to make the additional selections. So, the computer gets to pick just one player per year, and that's it. NFL GMs will have 28% more opportunities than the computer to get a quality player.
Rule #2- The Kangaroo Score gives equal weight to a player's vertical jump and broad jump. The Agility Score gives equal weight to a player's short shuttle time and their 3-cone drill. Since both scores are given in the form of how many standard deviations above (or below) average a player is relative to his peers, combining the two scores gives us a Combined Score that weights each of these four drills equally. So, the computer will simply pick whichever player has the best overall results, in terms of general athletic ability. Since I feel that some of these drills matter more for some positions along the line than for others, this is a significant dumbing down of the computer's ability, and I wouldn't suggest taking such a simple approach in reality. The computer is basically being hit in the head with a rock before making its picks, and looking for a 'generic offensive lineman' rather than looking for specific positions along the line.
Additionally, the computer will filter out any offensive lineman whose forty yard dash time is worse than 5.20 seconds (the actual average time for linemen is approximately 5.22 seconds). This will be a simple pass/fail type of test, so there will be no benefit to performing exceptionally well here. A player simply has to be average as far as his 40 time is concerned.
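For anyone curious about the mechanics, the equal-weighting scheme described in Rule #2, along with the pass/fail gate on the forty, can be sketched as follows. This is only a minimal sketch assuming a simple z-score against the player pool (the real scores may involve additional adjustments), and every player name and combine number below is invented for illustration, not actual data.

```python
from statistics import mean, stdev

def z_scores(values):
    """Convert raw drill results into standard deviations above or below the group mean."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical combine results (vert/broad in inches, timed drills in seconds).
players = {
    "Lineman A": {"vert": 33.0, "broad": 112.0, "shuttle": 4.45, "cone": 7.30, "forty": 5.05},
    "Lineman B": {"vert": 28.0, "broad": 102.0, "shuttle": 4.80, "cone": 7.80, "forty": 5.30},
    "Lineman C": {"vert": 30.5, "broad": 107.0, "shuttle": 4.60, "cone": 7.50, "forty": 5.15},
}
names = list(players)

def drill_z(drill, flip=False):
    """Z-scores for one drill across the pool; flip=True when a lower raw time is better."""
    zs = z_scores([players[n][drill] for n in names])
    return [-z for z in zs] if flip else zs

# Kangaroo: the two jumps, where higher is better.
# Agility: the two timed drills, where lower is better, hence the sign flip.
kangaroo = [(a + b) / 2 for a, b in zip(drill_z("vert"), drill_z("broad"))]
agility = [(a + b) / 2 for a, b in zip(drill_z("shuttle", flip=True), drill_z("cone", flip=True))]

for n, k, a in zip(names, kangaroo, agility):
    combined = k + a                            # each of the four drills carries equal weight
    passes_forty = players[n]["forty"] <= 5.20  # pass/fail gate, no bonus for a faster time
    print(f"{n}: Kangaroo {k:+.2f}, Agility {a:+.2f}, Combined {combined:+.2f}, forty OK: {passes_forty}")
```

The computer would then simply sort the surviving (forty-passing) players by Combined Score and take the best one available in the 3rd round or later.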
Rule #3- The computer can't pick a player unless that player would have been available in the 3rd round or later. There's no point in having the computer select someone, if that player most likely wouldn't have been an available option to most teams. So, every offensive lineman that the computer picks will have been passed over twice by every single team. Similarly, the computer can't select anyone who went undrafted, as I think this would also give some advantages to the computer. Overall, I think this is a fair compromise that tilts things in the favor of the NFL GMs.
With the computer now functioning in a somewhat brain damaged manner, we can look at who it would have selected in the last ten years. We'll call this the Lobotomy Line, since I think that accurately captures the way the computer has been impaired. As well as displaying each selection's 40 time, Kangaroo Score, Agility Score, and Combined Score (the only score that matters in the eyes of the computer for this little game), we'll also include a few other bits of data for those who are curious. First will be their Draft #, the overall selection with which they were chosen by an NFL team. Second will be the % Pot GS (Percentage of Potential Games Started). A player selected ten years ago could have potentially started 160 games (not counting the post season), and so on. Games not started due to injuries, being out of the league, or other unfortunate circumstances will still count against a player. This is only based on a player's "starts" through the 2012 season.
| Player | Year | Pick # | 40 yard | Kangaroo | Agility | Combined | % of Pot. GS |
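The % Pot GS figure defined above is simple arithmetic. Here is a quick sketch, assuming a 16-game regular season and counting from the player's draft year through 2012 (the function name is my own invention):

```python
def pct_potential_games_started(starts, draft_year, through_year=2012, games_per_season=16):
    """Percentage of regular-season games a player actually started, out of every
    game he could theoretically have started since being drafted."""
    potential = (through_year - draft_year + 1) * games_per_season
    return 100.0 * starts / potential

# A player drafted in 2003 could have started 10 * 16 = 160 games through 2012,
# so 80 career starts would work out to 50.0%.
print(pct_potential_games_started(80, 2003))
```

Note that this measure makes no excuses: games missed to injury or unemployment count against the player just the same.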
Since there is no universally agreed upon standard for judging offensive linemen, we'll begin by moving the players who started less than 40% of their team's games to the reject pile, before examining the players that remain a bit more closely.
Let's look at the computer's failed picks first, which would include Lydon Murtha, Kelly Butler and Seth Wand. Murtha seemed to show some potential for the Dolphins, but after a couple of injuries, he has magically disappeared from the league. Kelly Butler and Seth Wand, though managing to start 21 and 18 games respectively, also get dumped on the trash heap. Funnily enough, if the computer had been allowed to select players that went undrafted, it would have chosen 5-time Pro Bowler Jason Peters (1.136 Combined Score) in 2004, rather than Kelly Butler. Interesting food for thought, don't you think? Regardless, it is something we have to ignore (now that I have craftily planted the idea in your mind). I am only mentioning this particular example because it is unlikely, in reality, that we would be selecting a tackle (Butler) with such a poor agility score. In the end, the computer seems to have wasted a 3rd, a 6th and a 7th round pick. As for Brandon Brooks, who only just became a starter in 2013, most of the signs are pointing towards a positive outcome, and I am still ridiculously confident that things will turn out well for him.
Now, we can look at the computer's better picks.
The Evan Mathis situation is particularly interesting. Generally regarded as one of the top guards in the NFL, his career got off to a slow start. Some people will claim that the early stages of his career were hindered by injuries and that he was slow to develop. People would probably credit his current success to being "coached up", which I feel is most likely nonsense. In his first 6 seasons, he was on 3 different teams, playing in a total of 58 games (with 22 starts). How many sacks did he allow in this time? Well, according to the information I can find, he only allowed 3 sacks (0.136 sacks per game started, in his first 22 starts), which is quite exceptional. Despite the lack of interest that teams showed in him, I don't see any evidence to suggest that he was really struggling. Since then, his numbers have improved even further, to 0.084 sacks per game started.
Everybody knows RT Eric Winston, and would probably agree that he has generally been quite good. Do the Texans miss him? I would think so. Derek Newton (-0.667 Combined Score) has, so far, been a rather depressing replacement for Winston.
Then, there is RT Doug Free, who will probably create a bit of controversy. While Free has had his ups and downs, he is currently having a spectacular 2013. Whatever people may think of him, I suspect he is still better than what most teams have at RT. For a 4th round pick, he seems to present excellent value.
Next, we have G Josh Sitton. Despite the sometimes poor reputation of the Packers' offensive line, I don't think anyone would direct the blame towards Sitton. Unlike Evan Mathis, who took longer to gain recognition, Josh Sitton quickly seized a starting job, and is often ranked among the top guards in the NFL.
If LT Jared Veldheer played for someone other than the Raiders, he might be a more recognizable name. While he struggled some in his rookie year, which isn't surprising considering his transition from Hillsdale College, he has since entered the discussion as one of the top young OTs in the league. If he had a weakness early on, it was that he probably committed too many penalties (though that has steadily improved), but he doesn't give up sacks.
Last, but not least, we have C Jason Kelce. Despite being picked in the 6th round, he has started every game since his rookie year, except for those he missed due to injury in 2012. He doesn't give up sacks, allowing just 0.055 per game started, and has generally been a favorite of the guys over at Pro Football Focus. Make of this what you will. Seems to be one of the more intriguing young centers on the rise.
Overall, regardless of your methods for judging offensive linemen, I think most people would agree that 50-60% of the computer's picks (while operating in 'idiot mode') have been quite good, and this could climb as high as 70% depending on how things work out for Brandon Brooks. In the end, I am quite surprised and pleased with the results, and suspect this would have produced a rather intriguing, and possibly spectacular offensive line. While some of the players have areas of relative weakness, I have a hard time imagining that putting them together as a unit wouldn't magnify their strengths. Though I know this is obvious to everyone, players do seem to perform even better when teamed with other superior players.
Now, let's take a look at the results of an actual NFL team. Which team will we choose as an example? Well, since I've already criticized the Ravens a fair bit in the past, we might as well use them. It's probably simpler than offending a different fanbase. That the Ravens are generally perceived as a team that drafts well also has the obvious advantage of not appearing to pick on the helpless (hmm, like the Jaguars, perhaps).
| Player | Year | Pick # | 40 yard | Kangaroo | Agility | Combined | % of Pot. GS |
The Ravens seem to be following more of a whimsical 'just throw the dart and see where it lands' approach to drafting linemen. When using the same "40% of potential games started" criteria, we get a rather extensive list of probable failures for the Ravens. This includes Gino Gradkowski (now starting, though doing very poorly), Jah Reid, Ramon Harewood, Oniel Cousins, David Hale, Adam Terry, Brian Rimpf, and Mike Mabry.
When it comes to the Ravens' "successes" we wander into much murkier territory.
When we look at OT Tony Pashos, we might as well also look at Michael Oher. Despite the difference in their draft positions, and the expectations that come with that, I think a reasonable argument could be made that there isn't a huge difference between them. In some ways, Pashos might even have a slight edge over Oher. When it comes to sacks allowed per game started, Pashos has allowed 0.45 to Oher's 0.50. So, a minor point in Pashos' favor. If we look at their penalties per game started, Pashos has 0.50, to Oher's 0.59. Again, a point in Pashos' favor. So, the question becomes, is Pashos underrated, or has Oher been a bit overrated? Or, are both of them just mediocre? It seems hard to suggest that Pashos has faced easier circumstances, bouncing around between the Jaguars, 49ers, Browns, Redskins, and Raiders. In the end, I don't suspect either player is someone that most teams would be clamoring to acquire, though both are probably serviceable. Still, they seem like the sort of players that always leave a team searching for somebody better.
While I personally thought Jason Brown was a rather good center, during his time with the Ravens, his reputation/performance seemed to take a hit when he signed on with the Rams. At the very least, I think we can say he was enough of a success to briefly be the highest paid center in the league. What was it that eventually went wrong, when he wound up in St. Louis? I have no idea.
Current Redskins' guard, Chris Chester, also creates an interesting debate. I am forced to count him as a success because of the number of games he has started, but I find it hard to believe that anyone would argue that he is particularly good. I suspect most Ravens fans would laugh at this suggestion. You can make up your own mind about this situation.
The selections of guards Marshal Yanda and Ben Grubbs are probably the two most undeniable successes for the Ravens in the last 10 years. Both have been highly regarded, and I can see no reason to complain about either pick.
Lastly, we come to guard Kelechi Osemele. Despite the initial optimism people had about him, he appears to have been on a gradual downhill slide since his rookie year. While he gets starting opportunities, he has so far been rather unimpressive. Many people seem to be suggesting that his struggles are due to the talent that surrounds him, but I don't think a genuinely great player would need to have such excuses made on his behalf. Still, it is early in his career, so who knows what will happen here?
I suspect someone will want to point out Jah Reid's lack of success, despite his excellent measurables, as compared to Ben Grubbs, who was athletically somewhat below average. All I can say here is "Yup, this sort of thing happens". A player's athletic ability, as measured at the combine, doesn't guarantee success or failure. I merely want to suggest that the odds are generally in favor of the athletically superior, and worth betting on. Players tend to meet the expectations set by their athletic ability more often than they exceed them, at least as far as I have been able to judge these things.
If we are being extremely generous, this would mean that 46.6% of the Ravens' selections could be called successes. Still, I think that most people would agree that their true success rate is probably a fair bit lower than this. If we determined that just 2 of the Ravens' 'successes' were highly debatable (take your pick as to which 2), their success rate would drop to 33%, and we could be even harsher if we chose to. How you choose to evaluate this, in comparison to the computer's selections, is entirely up to you. One thing to remember, though, is that the Ravens, with their 15 selections, had 50% more opportunities to get it right. A full third of their picks also came in the first two rounds of the draft (with two 1st round picks, and three 2nd round picks), where the computer was forbidden to make selections.
When we try to evaluate the likelihood of a player developing into someone useful, we also run into some peculiar issues. The average player's result, in terms of percentage of potential games started, isn't that much worse for the Ravens than it is for the computer. The Ravens come in with an average result of 40.49%, versus a somewhat better result of 48.82% for the computer. The problem is that the Ravens' average seems to be heavily propped up by a few outliers, largely their higher draft picks, who get many starting opportunities regardless of their actual performance. If we look instead at the median results for percentage of potential games started, the Ravens fall to 21.87%, versus 44.85% for the computer. That is a rather commanding lead for the computer's picks.
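This mean-versus-median gap is easy to demonstrate with a toy example. The numbers below are invented to mimic the shape of the problem (a couple of high-pick outliers propping up the average); they are not the actual Ravens figures.

```python
from statistics import mean, median

# Invented % of potential games started for a hypothetical team's linemen:
# two high picks start nearly every game, while most later picks barely play.
team_results = [95.0, 90.0, 10.0, 8.0, 5.0, 3.0, 0.0]

print(f"mean:   {mean(team_results):.2f}")    # dragged upward by the two outliers
print(f"median: {median(team_results):.2f}")  # closer to the typical pick's fate
```

Here the mean lands around 30% while the median sits at 8%, which is why the median gives a more honest picture of what a team's typical pick turned into.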
While I think the computer's theoretical draft results could arguably be considered to be twice as good as those of the Ravens (depending on how generous you want to be towards the Ravens' picks), there is the question of whether this could be accomplished in reality. Let's look at some obvious angles from which all of this could be criticized.
1. This all assumes a reasonable knowledge of where the players would actually be selected. In reality, how would you snake these players before someone else selected them?
Generally, the pre-draft speculation as to what round players will be selected in is fairly accurate, though this accuracy diminishes towards the later rounds. I have no problem with taking a player as much as a full round ahead of where the conventional wisdom says he is projected to go, if it means getting the player I want, and if he is someone the computer deems a best-in-class prospect. People will probably criticize this as "reaching" for a player, while ignoring how bad most teams' more conventional "value" picks actually are.
2. Some of the computer's picks were relatively high in the 3rd round. Wouldn't many teams have to actually select them in the 2nd round, if they were intent on acquiring them?
Well, that's true, though trading up in the third round is a possibility too. Having to take a player in the second round, in order to get them, also opens up the possibility of taking other, potentially more highly rated players, that we didn't examine here because of the rule forbidding us from going after players in the first 2 rounds. Even when I ran this same little test with the computer restricted to picks that were available in the 4th round or later, the computer arguably did at least as well as the typical NFL GM.
3. So, the computer was right maybe 50-60% (though I suspect it will soon be 70%) of the time. Big deal! That's not that impressive!
Well, in reality, I don't think anybody is likely to see a success rate that is significantly higher than 70%. Somewhere around the 70% mark I've run into the issues of injuries, teams potentially neglecting/misusing talent, and the general unpredictability of human nature, as significant obstacles, regardless of how I approach this subject. I realize that people want to find some sort of magic formula that will result in 90% of a team's picks being outrageous successes, but this simply isn't going to happen. We also have to remember that the computer is operating with at least one hand tied behind its back in this little game. With that said, I still think these results are probably surprisingly close to optimal, even if I would prefer to do things somewhat differently in reality (weighting the scores differently, and including more data in other areas). While I aim for the 70% mark, hitting on 50-60% is still a decent outcome, and significantly better than what most teams could probably manage. It is also worth remembering that NFL teams, on average, only have about 21.5% of their picks turn out to be successful. We're just looking for a little edge, that over time will tilt things in our favor.
Remember, this is just a game I'm playing here, so take this all with a grain of salt. While we can incorporate more data than I've shown here, and weigh the data more intelligently than we have in this incredibly simplified example, I think it might illustrate some of the potential benefits of using objective data rather than the "gut feeling" that most GMs seem to rely upon. While I won't quote the great philosopher Rob Gordon, I think we all know what our guts are full of. If nothing else, I think all of this raises some interesting questions relating to the supposed expertise and 'eye for talent' that GMs and scouts allegedly bring to the table.