They laughed at my theories. They threw tomatoes when I presented my paper at the academy! Tomatoes, I tell you! My minions cower in terror, shrinking in fright from the very ideas contained herein! But I will show them! I will PROVE IT TO THEM ONCE AND FOR ALL. **The FOOLS, I WILL DESTROY THEM!! MWAHAHAHAAAA!** (ask me how)

Oh, sorry. Where was I? Apparently, there’s this basketball thing going on. Some sort of NCAA tournament that will prove who has the best basketball team. But what if it doesn’t? What if it’s all just arbitrary? Could it be that the chances of any team winning a game are not deterministic, but rather stochastic? I’ll admit that I don’t know that much about basketball. I mean, I played the sport in junior high. I do know the rules. And I even think that it’s a pretty game. But I don’t follow the ins and outs of a particular season.

So what’s a guy to do when he doesn’t really follow basketball, but lives in NC where bball is life and it’s bracket time?

You model it. Which is exactly what I did.

The basic model:

- Compute a team’s wins minus its losses; I’m sure there’s a word for this, but let’s call it demonstrated strength (D)
- For a given match-up, take a draw from a Beta distribution parameterized by each team’s demonstrated strength (D1 and D2)
- The resulting draw is the *probability* that the team representing the first parameter wins
- Draw from a uniform random variable to predict whether that team actually will win
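
Put together, the steps above can be sketched in a few lines of Python. This is a minimal sketch using NumPy; the function names are mine, not from the post:

```python
import numpy as np

def demonstrated_strength(wins, losses):
    """Demonstrated strength D: a team's wins minus its losses."""
    return wins - losses

def predict_winner(d1, d2, rng=None):
    """Predict whether the team with strength d1 beats the team with d2.

    Draws p ~ Beta(d1, d2), interpreted as the probability that the
    first team wins, then draws u ~ Uniform(0, 1) and declares a win
    for the first team when u < p.
    """
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.beta(d1, d2)
    return rng.uniform() < p

# Example: a 25-5 team (D = 20) against a 20-10 team (D = 10)
d1 = demonstrated_strength(25, 5)
d2 = demonstrated_strength(20, 10)
first_team_wins = predict_winner(d1, d2)
```

Note that the prediction itself is stochastic: running the same match-up twice can give different winners, which is exactly the point of the model.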

There are some flaws with the model; the two obvious ones:

- Different teams have different schedules, so one team with a 30-5 record might be a lot better than another with a 30-5 record in a different conference (I’m looking at you, SEC)
- It’s not clear that you should parameterize directly on the demonstrated strengths. There should probably be a scaling factor in there, so that rather than drawing from Beta(D1, D2), you draw from Beta(alpha*D1, alpha*D2)
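
The effect of that hypothetical alpha can be checked analytically: scaling both parameters leaves the Beta mean D1/(D1+D2) unchanged but shrinks (or widens) the variance, so alpha controls how confident the model is in the strength ratio. A sketch, where `beta_mean_var` is my own helper using the standard Beta moment formulas:

```python
def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Scaling both parameters by alpha keeps the mean fixed but
# changes the spread: larger alpha means a narrower distribution.
d1, d2 = 10, 20
for alpha in (0.5, 1.0, 2.0):
    mean, var = beta_mean_var(alpha * d1, alpha * d2)
    print(f"alpha={alpha}: mean={mean:.3f}, var={var:.4f}")
```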

But this is close enough. The nice features of the model are:

- The expected probability that a team will win is exactly D1/(D1+D2). So, a team whose wins outnumber its losses by 10 will have an expected probability of winning of 50% when playing against another team with D2=10, and only a 33% chance of winning when playing against someone with D2=20
- The closer two teams’ demonstrated strengths are to zero, the broader the probability distribution is. This reflects added uncertainty for two teams who win only slightly more often than they lose.
- The larger two teams’ demonstrated strengths are, the narrower the probability distribution is. For example, D1=20, D2=40 has the same expected probability as D1=10, D2=20; but because the larger strengths reflect more games’ worth of evidence, the distribution has less variance.
- This is actually pretty rigorous in Bayesian terms. Throughout the season, we can update the posterior distribution of the probability of winning based on the prior distribution and the most recent game.
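
That updating step can be made concrete: the Beta distribution is the conjugate prior for a Bernoulli outcome, so folding in a game just increments one parameter. Strictly, conjugacy applies when the parameters are win and loss counts rather than win-minus-loss differences, so this is a simplified sketch of the idea, not the post’s exact bookkeeping:

```python
def update_posterior(a, b, first_team_won):
    """Conjugate Beta-Bernoulli update after one game.

    Prior:     p ~ Beta(a, b)
    Posterior: Beta(a + 1, b) if the first team won,
               Beta(a, b + 1) if it lost.
    """
    return (a + 1, b) if first_team_won else (a, b + 1)

# Start from the current strengths as a prior and fold in the
# most recent game, as the post describes.
a, b = 10, 20
a, b = update_posterior(a, b, first_team_won=True)  # -> (11, 20)
```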

So, how well does the model work? Good question. I used it on ESPN, and it’s currently ranked in the 92.9th percentile, i.e., better than almost 93% of all ESPN brackets. All of my final four teams are still alive, and in general, the model predicted several of the biggest upsets in the tournament (e.g., Murray State vs Vanderbilt!). That said, this is just one random draw from the model. To test it further, I would like to go through a whole season of games and figure out if the probabilities of winning correspond to the statistics of a Beta distribution for the game’s D1 and D2. Moreover, I would like to infer the alpha parameter that I mention above.

If the model appears accurate, and we can properly infer alpha, then we get a probabilistic assessment of how feasible it is to even pick tournament champions.Â It may just be that at the end of the day, it comes down to luck.
