Tuesday, July 31, 2012
2012 MARS, Colley, Massey and ODM Ratings After Round 18

This week, MARS was the System with the second-highest number of re-ranked teams, shuffling four teams by one spot each, and two more by two places. Its two multi-spot movers were Essendon, down from 8th to 10th, and the Roos, up from 10th to 8th.

Ten teams remain rated above 1,000 on MARS, while two more are within a single solid win of enjoying the same status, but the overall trend is still towards Rating inequality. The gap between the teams ranked 4th and 15th now stands at a season-high 60 Rating Points (RPs), and that between the teams ranked 4th and 8th is at 19 RPs, also a season-high.

This week, in reviewing each team's Ratings momentum, we return to considering just the five most recent rounds, since a window of that length no longer needs to stretch back across the shortened weeks of the season to capture each team's five most recent contests.

Over this period we see that only seven teams have been net RP accumulators, foremost amongst them the Hawks and Swans - who now sit 1st and 2nd on the MARS ladder - and the Roos and the Crows. All four of these teams have added about 10 RPs or more since the end of Round 13. The other three gainers have grabbed between about 4 and 8 RPs each.

That leaves 11 teams as net RP losers, the Dogs and the Dees being the most prolific shedders, having lost about 17 and 12.5 RPs respectively. The Dons, Port and GWS form the next tranche of RP donors, each having given up about 7 or 8 RPs.

The remaining six teams have dropped between about 1 and 5 RPs each - a decline of about the size that could result from a single, unexpectedly heavy loss.

These RP gains and losses across the five rounds have been sufficient to see only three teams' MARS ranking alter by more than two places as a consequence: the Roos have climbed three places to 8th, Sydney have climbed three places to 2nd, and the Dons have dropped three places to 10th.

Returning our focus to just the most recent round and switching now to look at the Rating Systems other than MARS we find that ODM has re-ranked the most teams this week, shifting 10 teams, two of them by two places (Fremantle up to 10th and West Coast up to 4th). Colley re-ranked only half as many teams, but also re-ranked two by more than a single place. It dropped Adelaide down three places into 4th, and elevated Hawthorn two places into 1st. Massey was the round's most circumspect Rater, altering the ranking of just four teams and all of them by only a single place.
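
(For anyone wanting to replicate these re-rank counts, they amount to nothing more than a comparison of consecutive weekly rank vectors. Here's a minimal Python sketch; the teams and ranks are invented for illustration and aren't this week's actual rankings.)

# Sketch: count how many teams a Rating System re-ranked between
# consecutive rounds, and by how many places. Ranks are invented.

last_week = {"Adelaide": 1, "Sydney": 2, "Hawthorn": 3, "West Coast": 4,
             "Collingwood": 5, "Carlton": 6}
this_week = {"Hawthorn": 1, "Sydney": 2, "Collingwood": 3, "Adelaide": 4,
             "West Coast": 5, "Carlton": 6}

moves = {team: this_week[team] - last_week[team]
         for team in last_week if this_week[team] != last_week[team]}

print(f"{len(moves)} teams re-ranked")
for team, shift in sorted(moves.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "down" if shift > 0 else "up"
    print(f"  {team}: {direction} {abs(shift)} place(s)")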

This week I've broadened the historical context for the comparison of the predictive accuracy of the various Rating Systems by considering, instead of just the current season, every home-and-away game since 1999. For each contest, a Rating System is assumed to select as the winner the team that it rates more highly, regardless of which team is at home and which is away.

In each season all Raters except MARS commence their predictions from Round 2, since Raters other than MARS use only results from the current season to inform their Ratings. (Actually, in 1999 MARS also starts predicting only from Round 2, since Round 1 of 1999 represents "ground zero" for the current set of MARS Ratings. In every other season it starts predicting from Round 1.)
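
(In code terms the metric is as simple as it sounds: a Rater "tips" whichever team it rates more highly, and we record the proportion of games it tips correctly. Here's a minimal Python sketch; the field names in the game records are mine, invented for illustration, and not those of any MAFL data file.)

# Sketch of the accuracy metric: a Rater "tips" the higher-rated team,
# home-ground status ignored. Field names are hypothetical.

def predictive_accuracy(games):
    correct = total = 0
    for g in games:
        # Skip games the Rater can't call (teams rated level) and draws
        if g["home_rating"] == g["away_rating"] or g["home_score"] == g["away_score"]:
            continue
        tipped_home = g["home_rating"] > g["away_rating"]
        home_won = g["home_score"] > g["away_score"]
        correct += tipped_home == home_won
        total += 1
    return correct / total

games = [
    {"home_rating": 1012.4, "away_rating": 987.1, "home_score": 95, "away_score": 80},
    {"home_rating": 998.0, "away_rating": 1005.5, "home_score": 110, "away_score": 92},
]
print(f"Accuracy: {predictive_accuracy(games):.1%}")  # 50.0% on this toy pair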

As the table below shows, MARS is the best Rating System based on this metric, this methodology and this time frame. To be fair, the parameters used in MARS were originally established four or five years ago on the basis that they maximised predictive accuracy, but MARS' general dominance has, pleasingly, persisted well into the post-sample period.

(Note that I've included both ODM Regular and ODM Average in these comparisons. For information about the differences in these two versions of ODM, see this earlier blog.)

[Table: each Rating System's predictive accuracy and ranking, season by season, 1999 to 2012]

Each row of the table is conditionally formatted such that the strongest performance is in dark green and the weakest in dark red. The middle columns of the table provide the rankings for each Rater in each season. MARS, as well as being ranked 1st across the entire expanse of the period considered, is ranked 1st in 7 of the 14 seasons.

For the other seven seasons, ODM Average and Massey are ranked 1st in three each, while Colley achieves this distinction in only one. ODM Regular is never ranked 1st in a season, despite being ranked 3rd overall across all seasons.

A reasonable hypothesis about MARS' superiority might be that it stems entirely from the early season benefit it enjoys due to the carryover knowledge it has about team strengths from the previous season, for which it has a single parameter that has been optimised. The other Rating Systems are forced to ponder only the scant tea-leaves of early-season form.
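
(By "carryover" I mean the familiar device of starting each team's new season part-way between its final Rating from the old season and the all-team mean of 1,000. A minimal sketch of the idea follows; the parameter name and the value of 0.5 are mine, purely for illustration, and not the actual optimised MARS setting.)

# Sketch of season-to-season Rating carryover: a team starts the new
# season part-way between its end-of-season Rating and the mean of 1,000.
# CARRYOVER = 0.5 is illustrative only, not the optimised MARS value.

CARRYOVER = 0.5  # 0 would mean a full reset to 1,000; 1, no reset at all

def new_season_rating(final_rating, carryover=CARRYOVER, mean=1000.0):
    return mean + carryover * (final_rating - mean)

print(new_season_rating(1030.0))  # 1015.0
print(new_season_rating(970.0))   # 985.0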

Here's the evidence with which to assess that claim.

Whilst it seems from this table to be generally true that Raters' predictive accuracy increases as the season progresses, this is as true for MARS as it is for the other Systems. In fact, from Round 12 onwards, MARS is ranked 1st for eight of the individual rounds (one of them jointly). ODM Average, Colley and Massey each head the pack in four rounds - two jointly in the case of ODM Average and Massey, one jointly in the case of Colley - while ODM Regular leads in just three rounds, two of them jointly. MARS' superiority, then, doesn't appear to be solely an early-season phenomenon.

To make the picture clearer still it helps to group the rounds. The tendency for each Rating System to improve in accuracy as the season progresses is now more readily apparent, as is MARS' general superiority except in the second "quarter" of the season where, I'd conjecture, it might be hanging on a little too tightly to its views based on the previous season, having not yet fully embraced the realities of the current one. 

It's interesting to consider the relative performances of the Colley and Massey Systems which differ only in that the Colley System ignores victory margins. (You can show, in fact, that the Massey System - at least the version I use in MAFL - is mathematically equivalent to the Colley System if you replace actual game scores with scores of 1-0 for the winning team).
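
(To make that relationship concrete, here's a short sketch - my own illustration, not MAFL code - that builds both linear systems for an invented four-team schedule, feeds the Massey System 1-0 scores, and shows the two Systems returning the same team ordering on this schedule, albeit on different Rating scales.)

import numpy as np

# Toy comparison of the Colley System with a Massey System fed 1-0
# scores. Games are (winner, loser) pairs on an invented schedule.
teams = ["A", "B", "C", "D"]
games = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")]
idx = {t: i for i, t in enumerate(teams)}
n = len(teams)

# Colley solves (2I + G) r = 1 + (wins - losses)/2, where G has games
# played on the diagonal and minus the pairing counts off it.
C = 2 * np.eye(n)
b = np.ones(n)

# Massey solves M r = points differential; with 1-0 scores that
# differential is just wins minus losses. M is singular, so one row
# is replaced by a sum-to-zero constraint on the Ratings.
M = np.zeros((n, n))
p = np.zeros(n)

for w, l in games:
    i, j = idx[w], idx[l]
    for mat, rhs, bump in ((C, b, 0.5), (M, p, 1.0)):
        mat[i, i] += 1; mat[j, j] += 1
        mat[i, j] -= 1; mat[j, i] -= 1
        rhs[i] += bump; rhs[j] -= bump

M[-1, :] = 1.0  # sum-to-zero constraint
p[-1] = 0.0

for name, mat, rhs in (("Colley", C, b), ("Massey (1-0)", M, p)):
    r = np.linalg.solve(mat, rhs)
    order = sorted(teams, key=lambda t: -r[idx[t]])
    print(f"{name}: {order}, ratings {np.round(r, 3)}")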

In the first "half" of the season Massey prevails, suggesting that victory margins contain important information about relative team strengths in this part of the season. But, in the second "half", Colley prevails, suggesting that results rather than margins might be more important as the season progresses. One hypothesis for this phenomenon could be that later-season blowout results in games involving teams without realistic prospects of a spot in the Finals distort the information content of those contests. (And, dare I say it, tanking - if it exists - mightn't help either.)
