Contact Me

I can be contacted via Tony.Corke@gmail.com

 

Sunday
Mar 28, 2010

MAFL 2010 : Round 1 Results

It's hard to make money when you're betting solely on favourites.

The Heuristic-Based Fund landed 3 bets from 4 this weekend but the one it lost was the longest-priced and therefore most vital one from a profitability viewpoint, so it finished down 0.6% on the weekend. This leaves those with the Recommended Portfolio down 0.06% for the season - a mere flesh wound at most.

That means strike one for the Shadow Fund. Two more consecutive losses will see it hand over control of the Heuristic-Based Fund to whichever heuristic is best performed at the end of Round 3.

The fate of the Heuristic Fund notwithstanding, favourites generally fared well this weekend, winning six of eight games though covering the spread in only four.

Here are the details of the round:

(Three of this weekend's games were won by 56 points, which surely must be some sort of record, albeit a boring one.)

On the tipping front there's not much to report. The surfeit of victories by favourites resulted in most tipsters bagging 6 from 8 for the round, the only exceptions being Chi, who managed 5, and Home Sweet Home, who managed just 4.

Next we turn our attention to the performance of our margin-tipping heuristics where we find that BKB has started the season exceptionally well, producing the round's lowest mean and median absolute prediction errors.

Chi fared next best on mean APE, registering 27.63, and LAMP finished 2nd on median APE, recording a very respectable 19.5, its mean APE suffering significantly at the hands of the Dogs' and the Crows' losses.

Finally, let's take a look at the performance of the HELP algorithm this weekend.

It recorded 5 wins from 8 on line betting, which is barely better than chance.

As well as measuring HELP's win-loss record this season, I'm also going to assess how well it assigns probabilities to its predictions, using what are called probability scoring metrics. There are three such metrics commonly used to measure the accuracy of a forecaster's probability assessments in light of actual results:

  • The logarithmic score, which assigns a forecaster a score of 2+log(p) where p is the probability that he or she assigned to the winning team.
  • The quadratic score, which assigns a forecaster a score of 1-(1-p)^2 where p is again the probability that he or she assigned to the winning team.
  • The spherical score, which assigns a forecaster a score of p/sqrt(p^2 + (1-p)^2), where p is yet again the probability that he or she assigned to the winning team.

In a future blog I'll talk a bit more about the relative merits of each of these scoring approaches, but for now I'll just note that a naive forecaster, assigning a probability of 50% to every forecast, will return a score of 1 if the logarithmic measure is used, a score of 0.75 if the quadratic measure is used, and a score of about 0.71 if the spherical measure is used. These scores can be considered the minimum acceptable if we're to say that the HELP model is providing any guidance superior to coin-tossing.
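The three scoring rules are simple enough to sketch in a few lines of Python. One caveat: the post doesn't state the base of the logarithm, but base 2 is the one consistent with a naive 50% forecast scoring exactly 1, so that's what's assumed here:

```python
from math import log2, sqrt

def log_score(p):
    """Logarithmic score: 2 + log2(p), where p is the probability
    assigned to the winning team (base 2 assumed, so p = 0.5 scores 1)."""
    return 2 + log2(p)

def quadratic_score(p):
    """Quadratic (Brier-style) score: 1 - (1 - p)^2."""
    return 1 - (1 - p) ** 2

def spherical_score(p):
    """Spherical score: p / sqrt(p^2 + (1 - p)^2)."""
    return p / sqrt(p ** 2 + (1 - p) ** 2)

# A naive forecaster assigns p = 0.5 to every winner:
for f in (log_score, quadratic_score, spherical_score):
    print(f.__name__, round(f(0.5), 4))   # 1.0, 0.75, 0.7071

# When an 87% line-bet favourite loses, the winner was assigned p = 0.13:
print(round(log_score(0.13), 2))          # -0.94 -- a heavy penalty
```

All three rules reward confident, correct forecasts and punish confident, incorrect ones, which is why those two short-priced line-bet losses hurt HELP so much.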

Looking then at the scores for HELP in the last three columns of the table above, we can see that it performed across the weekend slightly worse than a naive forecaster would have, regardless of the scoring approach adopted. What hindered HELP this weekend was its assignment of an 87% probability to St Kilda winning on line betting and of a 90% probability to Adelaide doing the same. Its other incorrect forecast was that the Dogs would win on line betting, but it assigned a probability of just 59% to this outcome and so was not as severely punished for this error.

Wednesday
Mar 24, 2010

MAFL 2010 : Round 1

We're on.

From a wagering viewpoint, this season has started like most others, with the TAB Sportsbet bookie posting head-to-head odds more than a week before the first game and then leaving them unchanged for over a week. It's as if punters haven't yet realised the season's about to start. Indeed, as I type this, the odds are still what they were Monday a week ago.

MAFL, for its part, has also followed the early season script, wagering lightly, unwilling to risk much until the season's patterns have been established and calibrated. Only one Fund is active and only four bets have been made.

And so the cycle begins again for another season.

This year I've decided to do away with the need for readers of this blog to download PDFs to find out the details on wagers and results, so here instead are this week's wagering details, in-blog as it were:

As foreshadowed on the MAFL Online Twitter feed last week, we've four bets, each of 5% of the Heuristic-Based Fund, tying most Investors' fates to the success of the Cats, the Lions, Port and the Dogs. The Cats are the shortest-priced at $1.18 and the Dogs the longest-priced at $1.60, so there can be no suggestion that the Heuristic-Based Fund has subjected Investors to any substantial risk.

Amongst the remaining Funds, Hope will be released to wager commencing in Round 5 and all other Funds will be allowed off-leash one round later. Since all the Investors who've assembled their own portfolios this year have decided to weight the Heuristic-Based Fund zero, this week's Ready Reckoner is a simple one:

Clearly, no-one's going to be quaffing Moet on the strength of this weekend's results, but nor will anyone be stripping the lounge chairs to find loose change for food.

Investor profitability requires that at least 3 of the 4 teams wagered upon are successful, and that the Bulldogs are among them.

Turning next to tips, you'll notice that HAMP and LAMP have been added to last year's cavalcade of heuristic tipsters.

As you can see, it's pretty much wall-to-wall consensus (the most common variety, of course), with almost every tip a nod to the favourite. The exceptions are Home Sweet Home, which has gone all contrarian in half of the games, and Chi, who's tipped the Roos to topple Port by a point, making it his Game of the Round.

HAMP and LAMP agree with Chi's game choice for Game of the Round but not with his tip for the winner. Bravely, ELO has plumped for the Hawks v Dees clash as its Game of the Round, oblivious to the fact that the TAB Sportsbet bookie has installed the Hawks as almost 5-goal favourites. That same bookie - and hence BKB - has opted for the Fremantle v Adelaide game as his Game of the Round (assuming that, when the line market posts, it's with one team or the other receiving 6.5 points start or fewer).

That leaves only one more set of predictions to reveal - those of the new HELP model.

I've decided to include the probability estimates alongside HELP's tips as I hope that tracking the calibration of HELP's tips will provide an interesting subject for discussion during the season ahead. To ensure that I can't later be accused of relying on hindsight, I'll put it on record now that I don't expect HELP to do well this year; on balance, I think it's probably been overfitted to previous data.

Well that's it for this blog. Remember that the first game of the season starts Thursday, not Friday, night. Investors can enjoy watching this game unfettered by financial considerations. Enjoy.

Saturday
Mar 13, 2010

Now Open: This Year's Blog Readers' Competition

Last season a few of you took part in a competition in which participants had to predict the finishing order for all 16 teams and the winner, Dan, was the person whose finishing order was mathematically closest to the actual finishing order at the end of the home and away season. Not only were Dan's selections, overall, closest to the final ladder, but for 3 teams his selections were perfect: he had Geelong finishing 1st, St Kilda 2nd and Melbourne 16th.

So, how remarkable was Dan's getting 3 teams exactly correct? One way of answering that is to note that someone guessing completely at random would be expected to exactly match the ranking for 3 or more teams only 8% of the time. On that basis then there's some statistical evidence to reject the hypothesis that Dan tipped at random (which for those readers who know Dan and the depth of his footy knowledge will come as no surprise at all).

How many teams do you think a person selecting at random could expect to match exactly? For our 16-team competition, the answer is exactly 1. What if, instead, there were 1,000 teams in the competition - how many then might such a person expect to guess exactly? Remarkably, the answer is again exactly 1.

In fact, whether there were 1 team or 1 million teams, the expected number of exactly matched teams is 1. There's a nice proof of this, if you're feeling game, in this document, where it's called the hat-check problem.
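If you'd rather compute than prove, both claims - the expected value of 1 and the 8% figure for Dan's feat - drop out of the derangement numbers. This little sketch (my own illustration, not part of the competition rules) builds the full distribution of exact matches for a random ordering:

```python
from math import comb, factorial

def match_distribution(n):
    """Probability that exactly k of n randomly ordered teams land in
    their true ladder position, for k = 0..n, via derangement counts D_m
    (D_m = (m-1) * (D_{m-1} + D_{m-2}))."""
    D = [1, 0]  # D_0 = 1, D_1 = 0
    for m in range(2, n + 1):
        D.append((m - 1) * (D[m - 1] + D[m - 2]))
    total = factorial(n)
    return [comb(n, k) * D[n - k] / total for k in range(n + 1)]

probs = match_distribution(16)
print(round(sum(k * p for k, p in enumerate(probs)), 6))  # 1.0 -- one exact match expected
print(round(sum(probs[3:]), 3))                           # 0.08 -- chance of 3 or more
```

Run it with any n you like in place of 16: the expected number of exact matches stays stubbornly at 1.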

Anyway, this year I'll be running what I think is a more strategic variation of the competition I ran last year.

Here are the details:

  • Each participant must select between 1 and 16 teams.
  • For each selected team, he or she must predict where that team will rank on the competition ladder at the end of Round 22.
  • It is permissible that more than one team is predicted to finish in the same ladder position.
  • Participants will be assigned points for each team whose finishing position they've predicted. These points will be based on the absolute difference between the predicted and actual final rank of the team according to the following table.

So, for example, if a participant predicted that a team would finish 4th and it actually finished 7th, the absolute difference between the actual and predicted finish is 3 ladder positions, so the score for that team would be +1.

  • The winning participant will be the one whose aggregate score is highest.
  • Entries must be received by me prior to the centre-bounce for the first game of the season. Entries can be e-mailed to me at the usual address or can be posted as a comment to this blog.

In keeping with the tradition established last year, the winner will receive no monetary reward but will be feted in an end-of-season blog. Priceless.

A full example of how the scoring works might be helpful and is provided in the table below. The upper section of the table provides the final ladder for some imagined season (actually, it's last year's) and the lower section shows how the scoring would work for the alphabetically privileged Person A who decided at the start of the season to predict the finishing positions of just 3 teams. For his troubles he scored +31 points, this score bolstered by his correctly predicting that St Kilda would finish 1st.

The scoring mechanism for the competition this year encourages participants to provide predictions for all those teams that they feel they can peg to within 3 ladder positions since doing so will add to their final score.

This final table demonstrates that making more predictions won't always be better than making fewer, especially if one of the extra predictions winds up being significantly erroneous.

The competition is free to enter and you don't need to be a Fund Investor to participate. So, if you're thinking about putting in an entry, you might as well have a go.

Friday
Mar 05, 2010

Tips for Tipping

 

You've been forced to enter a tipping competition for a sport you know nothing about and, frankly, have no interest in adding to the list of sports you know something about. The competition is standard in that it rewards you with 1 point for a correct tip and no points for an incorrect tip. Your 15 minutes of research, which represents the entirety of the time you're willing to spend on the endeavour, reveals that the home team consistently wins about 58% of games season after season.

Given that knowledge, which, if any, of these three strategies is superior:

  1. Match the expected home team/away team mix in your tips, that is, tip the home team 58% of the time
  2. Recognise that you have no basis on which to favour one team over another, so tip the home team about 50% of the time
  3. Be lazy and always tip the home team

Make a mental commitment to your answer and try to come up with at least an intuitively appealing logic for it.

Option 1 has much to recommend it. If nothing else it means that you'll tip home teams about as often as they tend to win (and, consequently, also tip away teams about as often as they tend to win). Following this strategy you can expect to be right 0.58 x 0.58 + 0.42 x 0.42 = 51.3% of the time. Well, that's better than chance.

Option 2 is now looking a little sick. Using it, you can expect to be right 0.58 x 0.5 + 0.42 x 0.5 = 50% of the time. That's worse than Option 1, so scrub this strategy from the list.

Option 3, as it turns out, reigns supreme amongst this set of options, since it will make you right, on average, 58% of the time. Surely that'll be enough to convince some people that you know quite a bit about the sport.

Bear this analysis in mind when you're reviewing the tipping performances of the experts in the papers during the season ahead. The figure of 58% is about right for the proportion of home teams that have won recently in the AFL (if you count draws as one-half of a correct tip - it's a trifle lower if you count them as losses). The figures for the past four seasons are:

  • 2006 - 58.4%
  • 2007 - 58.1%
  • 2008 - 57.3%
  • 2009 - 57.8%

Actually, even if the expert you're looking at is tipping at better than 60%, that performance still isn't particularly impressive because, in the AFL, there's an even better strategy than reflexively tipping the home team, though its historical performance has been more variable. That strategy is to pick the favourite. Overall, favourites have won about 65% of the time since the start of 2006. The individual figures for each season are:

  • 2006 - 64.3%
  • 2007 - 64.6%
  • 2008 - 72.0%
  • 2009 - 65.9%

This pick-the-favourite strategy has another endearing feature to go with its sterling performance: it works well from Round 1 of the season. For the first four rounds of seasons 2006 to 2009 the strategy has tipped at 68%, and for the first eight it's tipped at 66%. In contrast, a pick-the-home-team strategy has fared badly in the first four rounds of these same seasons, landing only 52% of winners.

So this year, if you insist on trying to out-tip the guys whose salaries depend on their tipping prowess, whenever you're unsure about a tip just plump for the favourite.
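For the sceptics, the arithmetic behind the three options takes only a few lines of Python (the 58% home-team win rate is the figure quoted above):

```python
def expected_accuracy(p_home_tip, p_home_win=0.58):
    """Chance a tip is correct when you tip the home team with
    probability p_home_tip, independently of anything about the game."""
    return p_home_tip * p_home_win + (1 - p_home_tip) * (1 - p_home_win)

print(round(expected_accuracy(0.58), 3))  # 0.513 -- Option 1: match the mix
print(round(expected_accuracy(0.50), 3))  # 0.5   -- Option 2: coin-toss
print(round(expected_accuracy(1.00), 3))  # 0.58  -- Option 3: always tip home
```

Since the formula is linear in p_home_tip, the best uninformed strategy is always the extreme one: tip the likelier side every single time.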

 

Monday
Feb 01, 2010

Fund Profiles and Recommended Weightings for 2010

A few blogs ago (and isn't that a distinctly 21st-century measure of time?) I revealed some details about the six Funds that will be operating this season. Three Funds are backing up from last season on the strength of double-digit returns, while three more are embarking on their rookie wagering seasons.

Two Funds from 2009 have been delisted, the Chi-Squared Fund and the Line Redux Fund, both on account of performances that, analysis suggests, owed more to intrinsic incompetence than to transient misfortune.

Here's a summary of the six Funds:

Only the new, Heuristic-based Fund will operate from the start of the season, as all other Funds, experience suggests, need four or five rounds of observation before they become sufficiently learned to be entrusted with money.

The Hope Fund will, as it did last year, sit out the first four rounds of the season, and Prudence, New Heritage, Shadow and ELO-Line will all refrain from betting until Round 6.

All the heuristics we use for tipping - and now for wagering - assume that teams play every week, so they only produce tips for the home-and-away season. As such, the Shadow and Heuristic-based Funds will stop wagering once the finals roll around. All other Funds will continue to wager through to the end of the season.

This year, a pinch of sophistication - though I guess sophistication should really come in soupçons, not pinches - has gone into the determination of Recommended weightings for some of the Funds. Portfolio analysts use a measure called the Sharpe ratio to rate assets. The Sharpe ratio divides an asset's expected return by a measure of the variability of that return. Variability of return represents risk, and additional risk needs to be offset by additional expected return. Put another way, assets with larger Sharpe ratios are to be preferred.

The optimal weightings for the New Heritage, Hope and Prudence Funds were derived by calculating (a simplified version of) the Sharpe ratio for portfolios comprising different mixes of these three Funds, using the wagering and game results data for seasons 2006 to 2009. As it turned out, a one-third / one-third / one-third mix was very close to optimal for these three Funds.
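To give a flavour of the kind of calculation involved - using made-up fund returns, emphatically not the actual MAFL wagering data - a simplified Sharpe ratio (mean return divided by its standard deviation, ignoring any risk-free rate) can be grid-searched over portfolio weights like this:

```python
from itertools import product
from statistics import mean, pstdev

# Hypothetical season-by-season returns for three funds (illustrative only).
returns = {
    "NewHeritage": [0.30, 0.28, 0.31, 0.25],
    "Hope":        [0.55, 0.60, 0.10, 0.42],
    "Prudence":    [0.08, 0.12, 0.09, 0.11],
}

def sharpe(weights):
    """Simplified Sharpe ratio (mean / std of portfolio return) for a
    portfolio mixing the three funds with the given weights."""
    portfolio = [sum(w * r for w, r in zip(weights, season))
                 for season in zip(*returns.values())]
    return mean(portfolio) / pstdev(portfolio)

# Grid-search all weightings in 10% steps that sum to 100%.
best = max((tuple(w / 10 for w in ws)
            for ws in product(range(11), repeat=3) if sum(ws) == 10),
           key=sharpe)
print(best, round(sharpe(best), 2))
```

With the real 2006 to 2009 data, a search of this sort is what put the one-third / one-third / one-third mix so close to optimal for New Heritage, Hope and Prudence.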

Shadow's, ELO-Line's and the Heuristic-based Fund's weightings were determined with, how do I put this, less scientific rigour. I felt happy giving Shadow the same 20% weighting as New Heritage, Hope and Prudence, largely because ... well, because 15% seemed too little and 25% too much (and we did actually identify Shadow as a potential basis for wagering and tracked its performance last season, so its existence is not purely a manifestation of glorious hindsight).

That left 20% to be shared between the ELO-based and Heuristic-based Funds and, heck, they're both new and they're both speculative, so I assigned them 10% each. (In this respect, MAFL is a bit like most of the disciplines and systems of beliefs I know: thrillingly deep and cogent in its best parts, but discomfitingly shallow in the gaps.)

Anyway, as ever, please feel free to mix your own brew of Funds for your own portfolio. As you now know, your portfolio choices are unlikely to be any less defensible than mine.

Now let's look at each Fund a little more closely.

The New Heritage Fund

The New Heritage Fund, you might recall, was conceived as an ironic anti-Fund, the yin to the Heritage Fund's yang.

Last season the Fund performed admirably returning around 30%, a performance that it would have matched or nearly matched in each of the previous 3 seasons had it been in operation. What's more, but for a nasty final four weeks of the regular season last year, the Fund could have produced better than a 70% return.

This Fund has cranked out profits due to a relatively high level of wagering activity - it bets on about 70% of games after Round 5 - and large average bet size allied with a solid ability to pick more than two winners for every loser.

Definitely a keeper, I'd contend.

The Prudence Fund

If Prudence played cards it'd look at its hand 4 or 5 times before betting and then just bet the minimum. While it'd not make the most of its cards, it'd still grind out a return.

Prudence, on average, bets on about 5 games per round and wagers only about one-half the amount that, on average, New Heritage does. Its win rate is similar to New Heritage's, so the smaller and less frequent wagers it makes translate into consistently lower Fund returns.

Still, an average return of 10% per annum with minimal risk per bet and a four-year unbroken run of profits surely warrants inclusion in anyone's portfolio.

The Hope Fund

The Hope Fund's average wager is about the same size as Prudence's but that's about the extent of the similarities between these two Funds other than that their names are both nouns.

In a typical season Hope will only bet on about one-quarter of the games available to it and these bets will tend to be on teams priced around $3.50 to $4.00. Its average success rate of around 60% is enough to drive its average return to just over 40%, comfortably the highest amongst the three Funds returning from last year.

Last year the Fund's win rate was well below average; a return to trend would be most welcome.

The Shadow Fund

The Shadow Fund can be expected to wager on just over one-half of the games that are available to it and should collect on about 70% of these bets.

Historically it has made returns for entire seasons ranging in size from around 15% to 50% and averaging a little under 40%, partly because its fixed bet size of 5% allows it to convert its win rate into profits.

It is, though, a Fund untested in the wild, so caution should be exercised before approaching it with cash. I suggest you try it on small morsels to begin with.

The Heuristic-based Fund

This is a Fund with an ugly, utilitarian name and a complex wagering strategy, so it starts life with a weighty list in the "cons" column. Foremost amongst the "pros", however, is a string of impressive returns and, on this basis alone, it deserves a run this year.

The Fund has a win rate to rival Prudence's, but bets a little more often and a lot more heavily than Pru does, which has resulted in returns of between about 8% and 80% over the past four seasons.

I did explain the Fund's wagering strategy in an earlier blog and I refer you to that posting for more detail and an example, but I'll summarise the philosophy here. The Heuristic-based Fund chooses an heuristic to follow and wagers on that heuristic's tips when it tips the home team until such time as this strategy has caused it to lose money in two successive rounds.

At that point the Fund looks to see which heuristic would have provided the greatest return from wagering on its home team only tips since the start of the season. It then swaps its allegiance to that heuristic, although a swap might actually be a continuation of supporting the current heuristic if that heuristic has the best season-long record despite having lost money in the two most recent rounds. This new heuristic is then followed until such time as it loses for two consecutive rounds, and so on.

To start off proceedings, the Heuristic-based Fund follows Shadow in Rounds 1 and 2.
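For the curious, here's a rough sketch of that switching rule in Python. The heuristic names are real enough but their round-by-round returns below are invented for illustration, and I've simplified by treating any negative round return as "losing money" for that round (and by ignoring the mandatory Shadow start in Rounds 1 and 2):

```python
# Made-up per-round returns from wagering on each heuristic's home-team tips.
rounds = {
    "Shadow": [ 0.02, -0.01, -0.02,  0.03,  0.01],
    "BKB":    [ 0.01,  0.02,  0.01, -0.01,  0.02],
    "LAMP":   [-0.01,  0.03,  0.00,  0.02, -0.02],
}

def run_fund(rounds, start="Shadow"):
    """Follow one heuristic until it loses money in two consecutive
    rounds, then defect to the heuristic with the best season-to-date
    return (which may be the incumbent). Returns the heuristic followed
    in each round."""
    current, losses, history = start, 0, []
    n_rounds = len(next(iter(rounds.values())))
    for r in range(n_rounds):
        history.append(current)
        losses = losses + 1 if rounds[current][r] < 0 else 0
        if losses == 2:
            season = {h: sum(rs[: r + 1]) for h, rs in rounds.items()}
            current, losses = max(season, key=season.get), 0
    return history

print(run_fund(rounds))  # ['Shadow', 'Shadow', 'Shadow', 'BKB', 'BKB']
```

In this toy run, Shadow's consecutive losses in Rounds 2 and 3 trigger a defection to BKB, which by then has the best season-to-date record.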

(Hmmm ... three paras for a description of a Fund's underlying philosophy. No wonder the Fund has a Recommended weighting of just 10%.)

The ELO-based Fund

Another uninspiringly named Fund, though this time without the attraction of an unblemished record of achievement.

The ELO-based Fund uses as its basis the margins of victory derived from my MARS team ratings.

This Fund, when run using data for the past four seasons, bets about as often as the New Heritage Fund but has a lower win rate and a smaller average bet size, though a higher average price per bet. All up, this has produced an average Fund return about the same as that of New Heritage but with much more variability, so variable in fact that this Fund would actually have lost money in 2006 had we been using it.

Doubtless, and probably illogically, I'd have felt differently about allowing this Fund to operate this year had that losing season been more recent, but I now profoundly understand how difficult it is to win money wagering on handicap, so I'm willing to overlook this Fund's juvenile record and focus on its more recent successes. (Psychologists have a name for this bias: it's called the Recency Effect and it is said to afflict anyone who puts too much emphasis on recent events or data. Come to think of it, if the best they can come up with as a name for this phenomenon is "Recency Effect" then I don't feel so bad about "Heuristic-based" and "ELO-based".)

*~*~*~*~*~*~*~*

That's the full skinny on all the Funds.

As always, please remember that past performance is no indicator of future returns.