Chaos Rankings

Thursday, August 30, 2007

With Apologies to Big Red

My bad. I totally forgot that Western Kentucky was moving up to Division I-A this year. Yes, I'm always going to call it that instead of the NCAA's newly preferred Bowl Subdivision moniker. In any case, I added Western Kentucky to the preseason rankings with a pre-quantized score of 0 (since I have no data for them from last year, we know nothing about them) and a corresponding rank of 64. This had no effect on teams ranked higher and only pushed each team ranked lower down one spot. No big changes, but I wanted to make sure everyone was aware of it, you know, in case you're a Temple fan who was just saying how your team can't get any worse, only to discover that the Owls slipped from #119 to #120. As for Western Kentucky, enjoy your first game as a I-A member against the Gators in the Swamp on Saturday. Welcome to the big time.

Saturday, August 25, 2007

How I Got from There to Here

Behold! At long last, the 2007 preseason Chaos Rankings are available, and just in time for the season to start! I modified the calculation slightly from what was originally posted below. I took last year's final rankings and expanded them to run from -1 to 1 rather than 0 to 1, which essentially makes a team with a score of 0 the average team. Then I gathered all the information I originally mentioned, except that I doubled the value of each element. This yielded an adjustment factor between 1 (a team identical to last year's) and 0 (a team completely different from last year's). That factor was multiplied by the expanded score from last year to obtain a preseason score for this year, and that number was then requantized between 0 and 1 to obtain the scores shown here. I went this route because it allows bad teams to move up in the rankings by replacing coaches or players from a previously bad team. So if you had a negative score (meaning you suck) and replaced your entire squad and main coaching staff, you net a score of 0, putting you back in the middle of the pack rather than leaving you a lowly bottom feeder. I think this is more fair than what I previously intended to do, but as you can see below, few teams really moved very far anyway.

I also want to make it clear that these rankings have no impact whatsoever on the upcoming season. Unlike other polls (*cough* USA Today *cough*), which go a long way toward determining a #1 team at the end of the season by placing it #1 at the beginning, effectively forcing that team to screw up really badly and every other team to play flawlessly, my regular-season rankings use data only from games in that season, thereby determining the best team of that season based on performance and not preconceived notions of "best" or historical precedent.
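For the numerically inclined, the recipe above boils down to something like this. This is a rough sketch, not my actual script; the function names and the idea of passing in the summed penalty fraction from the August 7 post are just my framing of the steps described:

```python
def preseason_score(last_score, total_penalty):
    """last_score: last season's final score, quantized to [0, 1].
    total_penalty: sum of the penalty fractions from the August 7 list
    (at most 0.5 before doubling)."""
    expanded = 2.0 * last_score - 1.0       # stretch [0, 1] onto [-1, 1]; 0 = average team
    adjustment = 1.0 - 2.0 * total_penalty  # doubled penalties: 1 = identical team, 0 = all new
    return expanded * adjustment

def requantize(raw_scores):
    """Rescale every team's raw preseason score back onto [0, 1]."""
    lo, hi = min(raw_scores), max(raw_scores)
    return [(s - lo) / (hi - lo) for s in raw_scores]
```

A completely rebuilt team (total penalty of 0.5) lands on 0 no matter how good or bad it was last year, which is exactly the "back to the middle of the pack" behavior described above.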

2007 Preseason Rankings

Rank  Score     Team                    Previous Rank  Change
   1  1.000000  Boise State                   1           0
   3  0.961866  Ohio State                    3           0
   4  0.953999  West Virginia                 9           5
  16  0.845963  Virginia Tech                17           1
  18  0.810576  Wake Forest                  20           2
  21  0.779708  Oregon State                 25           4
  22  0.771823  Texas A&M                    24           2
  23  0.737749  Penn State                   29           6
  25  0.725940  Notre Dame                   21          -4
  28  0.713050  Boston College               18         -10
  29  0.706113  South Florida                32           3
  31  0.697052  San Jose State               35           4
  32  0.695393  Southern Miss                38           6
  38  0.687260  South Carolina               41           3
  39  0.685102  Georgia Tech                 37          -2
  40  0.661452  Western Michigan             42           2
  41  0.657083  Texas Tech                   39          -2
  43  0.625553  Oklahoma State               46           3
  45  0.621103  Central Michigan             26         -19
  49  0.586600  Florida State                53           4
  54  0.580888  East Carolina                56           2
  55  0.578867  Northern Illinois            55           0
  58  0.566689  Washington State             59           1
  59  0.562796  Arizona State                54          -5
  60  0.558466  Middle Tennessee State       61           1
  64  0.544304  Western Kentucky            N/A         N/A
  65  0.541166  Kansas State                 64          -1
  69  0.522006  Kent State                   68          -1
  70  0.513813  Air Force                    79           9
  74  0.501750  Louisiana Lafayette          73          -1
  75  0.493818  Michigan State               86          11
  76  0.483074  New Mexico                   72          -4
  77  0.481581  Iowa State                   98          21
  78  0.480325  Arkansas State               71          -7
  80  0.460890  North Carolina State        102          22
  85  0.439162  Ball State                   77          -8
  86  0.439058  North Carolina              104          18
  90  0.427071  Bowling Green                89          -1
  91  0.413524  North Texas                 110          19
  94  0.399735  Florida Atlantic             81         -13
  96  0.394612  Louisiana Monroe             85         -11
  97  0.392200  Fresno State                 90          -7
 100  0.374872  Louisiana Tech              113          13
 104  0.344400  New Mexico State             91         -13
 106  0.328971  Florida International       117          11
 107  0.326673  Colorado State               92         -15
 109  0.305105  Mississippi State           101          -8
 112  0.249499  San Diego State             106          -6
 116  0.231051  Miami (Ohio)                107          -9
 117  0.122599  Eastern Michigan            114          -3
 119  0.051830  Utah State                  116          -3

Tuesday, August 07, 2007

"Just Say No!" to preseason rankings. Maybe. Or not. Whatever.

[Insert insane sports commentator cliche statement here] That's right, it's August, training camps are in full swing, and college football is less than a month away. Numerous (what? one is a number!) people have been asking if I'm going to put up my own preseason rankings. Initially, I said no. My ranking system is based entirely on on-field performance, and with no performance data, there can be no accurate rankings. However, after some thought, I came up with an idea that just might work. Unlike the actual rankings, which are meant to indicate who is the best team, the preseason rankings should indicate who is likely to be the best team. My plan is to use the final rankings from last season as a starting point and then adjust them for the following factors: number of returning starters and changes in coaching staff. Look at it like this: if you were the best team last year but lost your head coach and half your starters, there's no reason to believe you'll be the best team anymore (unless every other team endured the same losses).
Some of the necessary information is easy to come by, some is harder, so if, as you're reading this, you know of a good source for any of this information, please let me know. I'd really like to find a reliable source rather than having to go through all 119 teams' homepages one at a time and individually analyze depth charts and coaches. Here's the information I plan to use and how I plan to use it:
  • Head coach - if a team has a new head coach, they incur a 15% penalty, unless the coach was either OC or DC during the previous season, in which case they incur a 5% penalty.

  • Offensive coordinator - if this position has changed, a 5% penalty is incurred.

  • Defensive coordinator - if this position has changed, a 5% penalty is incurred.

  • Offensive starters - For each of the 11 starters from last season who is not returning, a 1% penalty is incurred.

  • Defensive starters - For each of the 11 starters from last season who is not returning, a 1% penalty is incurred.

  • Punter - if this position has changed, a 1% penalty is incurred.

  • Kicker - if this position has changed, a 1% penalty is incurred.

  • Kick returner - if this position has changed, a 1% penalty is incurred.
The penalties are cumulative, by which I mean that the percentage points for each penalty are added together before being applied. So, for example, if a team has a new OC and 19 returning starters, they would take an 8% hit to their final ranking from last year. In the worst-case scenario, a team would take at most a 50% hit, effectively relegating the best team to the middle of the pack, which, considering that we (numerically speaking) know nothing about the current team, seems fair enough.
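As a sanity check, the whole penalty scheme fits in a few lines of code. This is a hypothetical sketch of the list above; the function and parameter names are mine:

```python
def total_penalty(new_head_coach, coach_was_coordinator, new_oc, new_dc,
                  returning_off_starters, returning_def_starters,
                  new_punter, new_kicker, new_returner):
    """Sum the penalty fractions for one team, per the list above."""
    penalty = 0.0
    if new_head_coach:
        # 5% if promoted from OC/DC, 15% for an outside hire
        penalty += 0.05 if coach_was_coordinator else 0.15
    penalty += 0.05 * new_oc + 0.05 * new_dc
    penalty += 0.01 * (11 - returning_off_starters)  # lost offensive starters
    penalty += 0.01 * (11 - returning_def_starters)  # lost defensive starters
    penalty += 0.01 * (new_punter + new_kicker + new_returner)
    return penalty
```

A new OC plus 19 returning starters (say, 10 on offense and 9 on defense) gives 0.05 + 0.01 + 0.02 = 0.08, the 8% hit from the example, while a team that turned over absolutely everything maxes out at 0.50.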
There are a couple problems with this plan however:
  • It punishes bad teams who have fired lousy coaches or who are replacing bad players

  • It fails to take into account especially deep teams who may rotate lots of players in and out, resulting in many experienced players despite the loss of starters

  • Not everything can be simply quantified. For example, a lost starting DE on team A might have been more valuable than a lost starting SS on Team B.
The first issue is probably fixable by inverting the penalty into a bonus if a team had a poor rating last year, at least for a coaching change, but then the question becomes: at what point do you make that cutoff? Furthermore, is anyone really interested in how bottom-rung teams are rated preseason? Here's what I propose: to keep everything consistent, I'm going to cut their score the same as everyone else's; then, if they improve, they'll have exceeded expectations and have something to be proud of. The second issue I have no solution for, so suck it up and prove my rankings wrong. The third issue, well, you're right. But since my rankings are supposed to be as impartial as possible (i.e., depending on numbers instead of feelings), that's just the way it's going to be. As I stated at the beginning of this project, yes, I'm aware that by virtue of the fact that I wrote the algorithm, it's subject to my own biases right from the get-go. The goal here is to come up with a plan that makes sense, and then rate everyone using the same formula to arrive at results that jibe with popular opinion on most counts. So, enough blabbing: go find me my source for all this data, and rankings will be forthcoming once I've acquired everything I need.