As part of the continuing series on statistics, the next few articles will discuss PFF ratings. Their ratings are used by many but considered controversial by some.
Profootballfocus.com has several components. It has a free site which includes articles, often based on their own database. In this regard, it is similar to what this board does.
It also has a premium site that one can subscribe to in order to access their database of statistics. This includes two major additional parts:
- A basic rating of every player for every game that is updated each game with cumulative statistics and some additional signature statistics
- Some group statistics, such as their pass blocking efficiency ratings.
The premium site, including its database, is controversial. This article will discuss PFF overall and its methodology. They attempt to measure the individual components of what is a team sport, which is a difficult endeavor. Later articles will discuss how one could make adjustments to their data and to the group stats.
Many draft sites have popped up that rate each player in many ways. Rabblerouser provided a good evaluation of various sites:
There is not the same level of analysis to evaluate current NFL players. PFF is about the only source, and as such, gets lots of attention. One might not like PFF data, but ANY data is better than nothing. At least folks can discuss elements using the same relative terms and background.
I have said from day one: if you don’t like their data – GET BETTER DATA. I encourage that and have outlined a method for BTB/SN to do so.
As Birddog26 said:
If not for PFF, there would be a huge hole in the stats that fans can look at and use. They do a great job considering they do not have the few million in payroll budget that some teams put into research, along with state-of-the-art equipment. I will say that they are probably better at grading than two-thirds of the NFL teams out there.
Floyd1 grasps the major use for PFF:
At least the grades allow the coaches to work on the more deficient problems with individual players. Now I don't know if the coaches grade differently, but these grades give us an idea what the players are doing right and wrong.
I discuss the use of statistics for diagnostic purposes in a separate article.
The first and best advice to give on PFF methodology is to read their methodology for grading.
One should be careful not to compare statistics from two or more different sites. Individual sites can, and do, define their statistics subtly differently from one another, but as long as THEY follow their own methodology consistently, one can use their stats and analysis internally.
This article will highlight their main thrusts underlying their grading. They seem to be relatively reliable and valid for the purposes to which they aim.
You cannot compare players at different positions. PFF norms [read curves] the positions differently so a grade at one position is not the same as another. One way to compensate for that effect is to compare each player to how they rank among their peers. Using the percentile basis, we can note that Livings was better at guard than Smith was at tackle compared to their peer groups but not necessarily say that Livings was better than Smith.
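The within-position comparison described above can be sketched as a simple percentile calculation. This is a minimal illustration, not PFF's actual method; the player names echo the example above but every grade below is made up.

```python
def percentile_rank(player_grade, peer_grades):
    """Percent of players at the SAME position graded at or below this player."""
    at_or_below = sum(1 for g in peer_grades if g <= player_grade)
    return 100.0 * at_or_below / len(peer_grades)

# Hypothetical peer-group grades (illustrative numbers only).
guard_grades = [-4.0, -1.5, 0.2, 2.1, 3.8]
tackle_grades = [-6.2, -2.0, 1.0, 5.5, 9.1]

# Each player is compared only to his own peer group, never across positions.
livings_pct = percentile_rank(2.1, guard_grades)   # 80.0
smith_pct = percentile_rank(1.0, tackle_grades)    # 60.0
```

Note that the raw grades (2.1 vs. 1.0) are never compared directly; only the percentiles within each position group are, which is why one can say Livings ranked better among guards than Smith did among tackles without claiming Livings is the better player.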
They evaluate each player at each position for each game. Each position has several sub-components, which they sum together to get an overall grade for each game. There are many skill sets for each position, and some players have more of one skill than another. For example, for offensive tackles, they rate
The overall score is the sum of the pass blocking, screen blocking, run blocking and penalty scores. The individual skill scores tend to be more accurate, as they are more narrowly defined than one big number. The overall number is often used for ranking purposes; however, the individual sub-scores can also be sorted and ranked on their own. For example, OCC uses just the receiving scores for evaluating TEs instead of the overall score.
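The roll-up described above is just a sum, which can be sketched as follows. The field names and grades here are assumptions for illustration, not PFF's actual schema.

```python
# Hypothetical per-season sub-scores for two offensive tackles.
tackles = [
    {"name": "A", "pass_block": 4.0, "screen_block": 0.5, "run_block": -1.0, "penalty": -2.0},
    {"name": "B", "pass_block": 1.0, "screen_block": 0.0, "run_block": 3.0, "penalty": 0.5},
]

SUB_SKILLS = ("pass_block", "screen_block", "run_block", "penalty")

def overall(player):
    # The overall grade is simply the sum of the sub-skill grades.
    return sum(player[s] for s in SUB_SKILLS)

# Rank by overall, or sort on any single sub-skill instead -- the way OCC
# ranks tight ends by receiving score alone.
by_overall = sorted(tackles, key=overall, reverse=True)
by_pass_block = sorted(tackles, key=lambda p: p["pass_block"], reverse=True)
```

Note how the two orderings can disagree: player A grades best in pass blocking, yet player B has the higher overall score, which is exactly why sorting on a single sub-skill can be more informative for a specific question.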
They evaluate these individual skills on a per-play basis, though they recognize it is difficult to measure the impact of individuals in a team sport. They are quite open that they will miss some plays, and thus give no rating on a play if they cannot determine the individual component of it.
Further, one should note that they will adjust the official stats so that they are easier to use mathematically. As they say
“Statistics in their raw form are considered objective. But in our opinion, with the small number of NFL games played each season, raw stats are very often unintelligent. If a QB throws three interceptions in a game but one came from a dropped pass, another from a WR running a poor route and a third on a Hail Mary at the end of the half, it skews his stats by far too great an amount to be useful. Our “subjective” grading allows us to bring some intelligence to the raw numbers”
Players are evaluated with a range of scores. Zero is the mean score and is considered average, and they consider the range from -1 to 1 about average. Anything outside that range is noteworthy: below -1 is considerably below average and above 1 is considerably above average. The scores are distributed roughly along a normal bell curve.
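The bands described above can be made concrete, along with what the bell curve implies. This is a sketch under the assumption that grades follow a standard normal curve (mean 0, standard deviation 1); the thresholds come from the article, the distributional assumption is mine.

```python
from statistics import NormalDist

def classify(grade):
    """Bucket a grade into the bands described in the article."""
    if grade > 1:
        return "considerably above average"
    if grade < -1:
        return "considerably below average"
    return "about average"

# Under a standard normal curve, the share of grades landing in the
# "about average" band between -1 and +1 is roughly 68%.
std_normal = NormalDist(0, 1)
share_average = std_normal.cdf(1) - std_normal.cdf(-1)  # ~0.683
```

In other words, if the curve assumption holds, roughly two-thirds of graded performances are unremarkable by construction, which is why scores beyond plus or minus 1 deserve attention.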
One of the more controversial aspects of their scoring involves penalties. They provide a score for each game based on penalties. They say, with some justification, that a penalty by an individual player is a bad play and should be evaluated as such. They track penalties as a separate sub-skill and then add that in along with the other sub-skills in the overall grade for each game and in the cumulative scores. Yet the effect of penalties on the final evaluation seems to be overrated by many who complain.
For example, Smith had an overall score of 3.8, but even if you added back his penalty score of -3.3, the resulting 7.1 overall would move his ranking only from 40th to a tie at 32nd of 80 rated tackles, and change his percentile ranking from the 50th to the 60th percentile.
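The add-back in the Smith example is simple arithmetic, sketched below. The numbers are the ones quoted above; the percentile formula is an assumption (rank 1 = best, percentile = share of the field ranked at or below).

```python
# Smith's figures from the example above.
overall_score = 3.8
penalty_score = -3.3

# Removing the penalty component: subtracting a negative adds it back.
without_penalties = overall_score - penalty_score  # 7.1

def percentile(rank, total):
    """Convert a rank (1 = best) among `total` rated players to a percentile."""
    return 100.0 * (total - rank) / total

before = percentile(40, 80)  # 50.0
after = percentile(32, 80)   # 60.0
```

A swing of 3.3 grade points moves Smith only ten percentile points, which is the article's point: penalties matter, but less than many complaints suggest.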
The biggest factor was his first two games, with scores of -3.1 and -5.2 respectively. More importantly, he improved significantly after those two games.
One gets an evaluation for each game played, so players with more games played have more opportunities to change their grade. Thus the cumulative scores are most reliable at the extremes: the best players who played every game and the worst players who played every game.
The players in the middle may fall there for several reasons:
- stars who missed many games
- good backups who play well behind stars in many games
- backups who get to play in place of injured stars who miss games
- lesser starters who play average in all games
- players who play outstanding in a few games and are subject to small-sample-size bias
One should take the cumulative-score rankings with caution for these middle players and make adjustments. Further adjustments are discussed in another article.