The MAFL Funds for 2013
It's been about four months since the Swans toppled the Hawks in the grand Granny of 2012, bringing to a close the third successive loss-making season for MAFL - because that, of course, is how seasons are remembered around here - and I'll admit that I was ready when the siren fell silent on that Saturday in September to give MAFL a miss for a while. Say a millennium or two.
Maybe, I thought in darker times, I might make a last, ironic bet on the all-conquering Suns shortly before the heat-death of the universe legally voided all unsettled wagers. (Just who is going to update that Wikipedia entry shortly after the impressive pedigree of its scholarship is revealed to a now chronically lethargic Universe?)
Anyway, with the benefit of a little distance, both in time and in perspective, I've decided to lace up the stats for another season of walking the fine line between pleasure and pain, of predicting results and predicting margins, of making probability assessments and, yes, of wagering, perchance to win.
Which is what this blog is all about. Specifically, the Funds of 2013.
RETURN OF (BUT MAYBE NOT ON) THE FUNDS
New visitors to MAFL might like to review last year's post about the Funds of 2012 before reading further, as three Funds very similar to those that operated last year will form the basis of MAFL wagering for the year ahead, so much of that previous post remains relevant.
Broadly, the changes to the Funds for the upcoming season relate to their underlying statistical algorithms and their wagering strategies. If you're short of time, here's the summary:
- As per last year, there will be a Head-to-Head Fund, a Line Fund, and a Margin Fund operating in 2013
- In contrast to last year, all three Funds will be allowed to wager throughout the season, though each will be subjected to varying wager "caps" based on the Fund's historical performance during different portions of the season
- The Head-to-Head Fund will, once again, Kelly stake, only on Home teams and only if they're priced at $1.50 or higher
- The Line Fund will also, once again, Level stake only on Home teams
- The Margin Fund will still rely on the margin predictions of Combo_NN_2, but will rely equally on those of Bookie_9, wagering on both Predictors' opinions but only when Combo_NN_2 is predicting a Home team victory or a draw
- The Recommended Portfolio will comprise
- 60% Line Fund
- 30% Head-to-Head Fund
- 10% Margin Fund
That's all you need to know to follow the MAFL Funds this season and, if you were following them last season, you'll not notice a great deal of difference.
If you'd like to understand the details of the Funds, including the changes to the underlying algorithms, however, and my rationale for deciding to operate them in the way I've just described, read on.
The Head-to-Head Fund
Over on the Statistical Analyses blog I've been looking at better ways to determine the TAB Bookmaker's true but implicit probability assessments of each team in a contest, the calibration of the TAB Bookmaker, and the predictive performance of the Head-to-Head and Line Fund algorithms, partly on the basis of which I've made two major changes to the way the Head-to-Head Fund will work in 2013 compared with how it worked in 2012:
- The Fund algorithm will now use as an input Bookmaker Implicit Probabilities derived from what I'm calling the Risk-Equalising Approach to assessing them (see the first link above for details). This will replace the existing Bookmaker Implicit Probabilities that were derived from what I'm calling the Overround-Equalising Approach.
In brief, the difference between the two Approaches is that the Risk-Equalising Approach assumes that the TAB Bookmaker embeds overround in the prices of both teams in such a way that it provides for an expected profit assuming a calibration error of the same magnitude for each team. By comparison, the Overround-Equalising Approach assumes that the overround is levied equally on both teams, which has the effect of exposing the bookmaker to setting market prices for underdogs that will have a negative expectation if he's made even small calibration errors in the wrong direction - for example by assessing a team as a 10% chance that is, in reality, an 11% chance.
(As a minor, technical point, last year's Head-to-Head Fund algorithm used a log-transformed version of the probabilities produced through the Overround-Equalising Approach, while this year the algorithm will use the untransformed version of the probabilities produced through the Risk-Equalising Approach.)
- The Fund will continue to Fractional Kelly Stake, with the fraction again being applied to the original and not the then-current size of the Fund, but varying depending on the round of the season as follows (a rough sketch of this staking calculation appears after the list):
- For the first six rounds of the season the Fund's bet size will use a divisor for the Kelly stake of 20. Put another way, in these rounds the Fund will bet about one-quarter of what it will wager in those parts of the season where it has, historically, excelled
- Between rounds 7 and 12, the Fund's bet size will use a divisor of 10
- Between rounds 13 and 23, the divisor will be 5, as this is the part of the season where, historically, the Fund algorithm has done well
- For rounds 24 and beyond, the divisor will revert to 10.
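To make the staking rule a little more concrete, here's a minimal Python sketch of how I think of the Fund's wagering calculation: a full Kelly fraction based on the Fund's probability assessment and the TAB price, scaled back by the round-dependent divisor and applied to the original Fund size, with bets restricted to Home teams priced at $1.50 or higher. The function names and the worked example are mine, for illustration only, and not lifted from MAFL's actual code.

```python
def kelly_divisor(round_number):
    """Round-dependent divisor for the Head-to-Head Fund's Kelly stake
    (the schedule from the list above, transcribed for illustration)."""
    if round_number <= 6:
        return 20
    elif round_number <= 12:
        return 10
    elif round_number <= 23:
        return 5
    else:          # Round 24 and beyond
        return 10


def head_to_head_stake(prob_home_win, home_price, round_number, original_fund):
    """Suggested wager (in dollars) on the Home team for one game.

    prob_home_win : the Fund algorithm's assessed probability of a Home win
    home_price    : the TAB head-to-head price for the Home team (decimal odds)
    round_number  : round of the season, used to pick the Kelly divisor
    original_fund : the Fund's original (not then-current) size
    """
    # Wager only on Home teams priced at $1.50 or higher
    if home_price < 1.50:
        return 0.0

    # Full Kelly fraction: edge over the bookmaker divided by the net odds
    edge = prob_home_win * home_price - 1
    if edge <= 0:
        return 0.0
    kelly_fraction = edge / (home_price - 1)

    # Fractional Kelly: scale back by the round-dependent divisor,
    # applied to the original Fund size rather than the current balance
    return original_fund * kelly_fraction / kelly_divisor(round_number)


# Example: a Home team rated a 60% chance at $1.80 in Round 15
print(head_to_head_stake(0.60, 1.80, 15, original_fund=1000))  # about $20
```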
These divisors reflect the relative calibration of the TAB Bookmaker at various points of the season, and also the ROIs that would have been recorded by this new version of the Fund had it been using the same divisor for every round (assuming that the Fund wagered only on Home teams and only then if that team's price was $1.50 or higher), as shown in the table below:
Other divisions of the season could easily be justified on the basis of this table, but the broad cut-off points I've chosen seem to have some empirical validity.
In deciding to adopt this new wagering approach I was mindful that I'm flirting with a form of hindsight bias - which didn't entirely dissuade me from the path, but did make me opt for as few divisions of the season as appeared warranted.
For some time I've recognised that the first six rounds or so of any season are rounds where many of the MAFL models appear to be recalibrating from the previous season, so it makes sense to wind back our exposure at these times. Previously my response has been not to wager at all in these rounds, but a fresh analysis reveals that, in some years, there is opportunity in this part of the season, often in those years where there turns out to be less opportunity at other points in the season.
With a divisor of 20, this is the part of the season where any head-to-head wagers are likely to be small.
Rounds 7 to 12 comprise, roughly, the second quarter of the home-and-away season. Head-to-head wagering can be a struggle in this part of the season, though ultimately a mildly profitable one, due mostly to what's happened in Round 7 of each season. Accordingly, we'll allow ourselves to consider wagering a little more across these rounds by ratcheting the divisor down to 10.
The next part of the season, from Rounds 13 to 23, is where we'd hope to make the majority of any money we will make on head-to-head wagering in a typical season. Across these 11 rounds, only Round 14 would have failed to be profitable in aggregate using the new Head-to-Head algorithm across seasons 2007 to 2012. This is, therefore, the part of the season where we crank up the wagering to its maximum by dropping the divisor to 5, which is the value used in previous seasons for the entire period when the Head-to-Head Fund was active.
Lastly, we get to Rounds 24 and above, which have overall been bad news for this new algorithm. That said, there have only been 11 wagers during this period so I'm not entirely convinced that the -10% ROI result is reflective of the algorithm's underlying potential - I know, for example, that head-to-head Funds from earlier MAFL years have performed well during the Finals. Accordingly, I've set the divisor for these rounds to be the same as that for Rounds 7 to 12 (ie 10).
Had this Fund been operating with these divisors over the past six seasons, its performance would have been as tabled at right.
In brief, it would have made a profit in every year with an average turn of about 1.2 times, an average ROI of almost 15% and, therefore, a typical RONF of over 20%.
As usual, of course, I'm making no promises, explicit or implied, that future performance will be faithfully mirrored in the past ...
This year, the Head-to-Head Fund will carry a 30% weighting in the Recommended (aka Only) Portfolio. There's no real empirical or other logical basis for this other than it reflecting, in some vague sense, my opinion of the Fund's likely volatility relative to the other Funds.
To estimate the value of switching to the Risk-Equalising from the Overround-Equalising Approach I calculated the returns that would have been achieved by a version of the Head-to-Head Fund differing only in that it used the Implicit Probabilities from the Overround-Equalising rather than the Risk-Equalising Approach. It would have wagered with almost exactly the same frequency in each of the six seasons - switching Approaches doesn't change the model's probability estimates by that much - but it would have wagered a little less in dollar terms and produced smaller ROIs in every season except 2008, when it would, in any but a pedant's view, have produced the same ROI. Overall, its ROI across the six seasons would have been 14.2%, compared with the 14.7% shown here for the new Head-to-Head Fund.
So, it seems that the switch to the Risk-Equalising Implicit Probability estimates represents a small improvement. I doubt that it will be the difference between profit and loss but I'm taking any edge I can find.
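For readers who'd like to see the difference between the two Approaches in something more concrete than prose, here's a small Python sketch of my reading of them: the Overround-Equalising Approach normalises the inverse prices, so that the overround is levied in equal proportion on both teams, while the Risk-Equalising Approach removes the same additive amount from each team's inverse price, allowing for a calibration error of equal magnitude on either team. Treat this as an illustration of the idea rather than a transcript of MAFL's actual calculation.

```python
def implicit_probabilities(home_price, away_price):
    """Two ways of backing out the TAB Bookmaker's implicit Home-team
    probability from a pair of head-to-head prices (illustrative only)."""
    inv_home, inv_away = 1 / home_price, 1 / away_price
    total_overround = inv_home + inv_away - 1

    # Overround-Equalising: overround levied equally on both teams,
    # so the implicit probabilities are the normalised inverse prices
    oe_home = inv_home / (inv_home + inv_away)

    # Risk-Equalising: the same additive buffer is removed from each team's
    # inverse price, providing for calibration errors of equal magnitude
    re_home = inv_home - total_overround / 2

    return {"overround_equalising": oe_home, "risk_equalising": re_home}


# Example: Home at $1.60, Away at $2.50
print(implicit_probabilities(1.60, 2.50))
# overround_equalising is about 0.610, risk_equalising is about 0.6125
```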
The Line Fund
The Line Fund, like the Head-to-Head Fund, will broadly operate as it did last season: Level-stake wagering, and only on Home teams. It too will have only a couple of major changes, brought on by the same analyses linked to earlier:
- The underlying Fund algorithm, which previously used no Implicit Probability variable at all as an input, will now employ the Implicit Probabilities derived from the Risk-Equalising Approach.
- The proportion of the Fund wagered on any single bet will vary depending on the round of the season as follows (a sketch of this sizing schedule appears after the list):
- For the first five rounds of the season the Fund's bet size will be 1.25% of the Fund
- Between Rounds 6 and 11, the Fund's bet size will be 2.5% of the Fund
- Between Rounds 12 and 18, the Fund's bet size will be 5% of the Fund
- Between Rounds 19 and 23, the Fund's bet size will be 2.5% of the Fund
- For Rounds 24 and beyond, the Fund's bet size will be 0.625% of the Fund.
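As a quick illustration, here's a Python sketch of the bet-sizing schedule just described: a Level stake whose size is a fixed percentage of the Fund, with the percentage determined by the round of the season and bets placed only on Home teams. The function names are mine, and whether the percentage applies to the original or the then-current Fund size is a detail I've left as a parameter rather than asserted.

```python
def line_fund_bet_fraction(round_number):
    """Fraction of the Line Fund wagered per bet, by round of the season
    (the schedule from the list above, transcribed for illustration)."""
    if round_number <= 5:
        return 0.0125   # Rounds 1-5: 1.25% of the Fund
    elif round_number <= 11:
        return 0.025    # Rounds 6-11: 2.5%
    elif round_number <= 18:
        return 0.05     # Rounds 12-18: 5%
    elif round_number <= 23:
        return 0.025    # Rounds 19-23: 2.5%
    else:
        return 0.00625  # Rounds 24 and beyond: 0.625%


def line_fund_stake(round_number, fund_size, is_home_team_bet=True):
    """Level stake (in dollars) for a single line bet on a Home team."""
    if not is_home_team_bet:
        return 0.0
    return fund_size * line_fund_bet_fraction(round_number)


print(line_fund_stake(15, 1000))  # $50 in Round 15
```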
These bet sizes are based on the historical performance of the new Line Fund algorithm in each round of the season, which I've summarised in the table below.
This table makes it especially clear why I've maximised the bet sizes for Rounds 12 to 18 and reduced them for other rounds, especially for Rounds 24 and onwards.
What it doesn't reveal, however, is my reasoning for significantly reducing the bet size for Rounds 1 to 5, which I did based on a review of how changing bet sizes affected not just profit in aggregate measured across all six seasons from 2007 to 2012, but also how it affected profit in each season considered separately.
Informally, I was seeking a set of bet sizes that produced steady profits in each season rather than a set that induced booms and busts from one season to the next, even if the more volatile performance might, in aggregate, be more profitable. In this way I hope to be exploiting real underlying differences in the model's abilities across the different parts of the season rather than tuning bet sizes based on an historic and non-repeatable bout of luck in a round or two for a season or two.
That may, of course, turn out to be a distinction without a difference, but that's a judgement we can look forward to making in about 10 months' time.
Speaking of bouts of luck, the performance of the new Line Fund across Rounds 12 to 18 is an astonishing 116 and 67, which is some 25 correct predictions better than random chance. To be fair, I have isolated these rounds from the less-exceptional nearby rounds and am therefore guilty of the fruitiest of cherry-picking, but the fact that they are contiguous lends some weight to the view that the models do take time to calibrate to a season, and do have periods of time when they are far more wager-worthy than at other points of the season.
On the flipside, I was equally surprised to see how poorly the new Line Fund performed towards the back end of the season, and especially around Finals time. Again though, the number of wagers is small, so the performance is subject to significant sampling variability. At 13 and 9 it is, after all, just two predictions shy of an as-good-as-chance performance.
Had the new Line Fund been wagering across the past six seasons with the variable bet sizes in place as described, its in-market performance would have been as shown in the following table.
It would, most notably, have made a profit in every season, which is pretty much the pre-interview screening question that I put to most intending MAFL Funds. Like many an outstanding interviewee however, some Funds' on-the-job performances have been remarkably at odds with their impeccable resumes and glowing references.
In a typical season the Fund would have made a bit over 100 bets winning about 56% of them, each bet representing, on average, about 2.9% of the Fund.
Those numbers mean that the Fund would have turned a little over 3 times each season and so, with an ROI of around 11%, would have recorded an average RONF of about 35%.
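If I'm reading RONF as ROI multiplied by the number of times the Fund turns over its nominal size in a season - an assumption on my part rather than something spelled out above - the arithmetic behind that last sentence runs roughly as follows, with 3.2 standing in for "a little over 3 times":

```python
roi, turnover = 0.11, 3.2   # "around 11%" ROI, "a little over 3" turns of the Fund
ronf = roi * turnover       # about 0.35, i.e. an RONF of about 35%
print(f"{ronf:.0%}")
```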
In recognition of that overall performance and the relative consistency of it across all six seasons - and a possibly misguided but by no means unprecedented belief that it will continue - the Line Fund will carry a weighting of 60% this season in the Recommended Portfolio.
I also reran the new Line Fund algorithm without the Implicit Probabilities from the Risk-Equalising Approach. The ROI for the six seasons taken as a whole was 10.6%, which is not quite 1 percentage point smaller, while, taking the seasons separately, the ROI was higher than the new algorithm's in four of them and lower in the remaining two. Most compelling for me was that the algorithm without the Risk-Equalising Implicit Probabilities made a loss in 2012. That is, almost always, grounds for dismissal.
The Margin Fund
Let me start by being unhelpfully honest: at the end of season 2012, I wasn't at all sure what to do with the Margin Fund. How do you solve a problem like The Margin Fund?
The possibilities are: leave it alone, tweak it a little, or let it go gentle into that good night like so many Funds before it.
Taking those options in reverse order, it felt a little premature to exit the Margin wagering market so soon after entering it. By its nature, wagering in this market, where you'll typically be placing bets at odds of 8/1 or longer, will almost inevitably be subject to significant variability. A goal or two in a game or two here or there could well be enough to flip a Fund from red ink to black.
Last season, the Margin Fund finished with an unbroken string of 39 losses, a run far longer than even an avaricious TAB Bookmaker could reasonably have felt entitled to expect. This streak left the Fund with a 10 and 104 record for the year, which was sufficient to extinguish all but 12.5% of the Fund's original investment. In total, 24 of those 104 losses were by a single bucket and so could not fairly be characterised as wasted wagers. Nudging just 3 or 4 of them into the desired bucket would have been sufficient to see the Fund record a profit.
The loss-making Margin Fund of 2012 placed its wagers based on the margin predictions of the Combo_NN_2 algorithm but would, with the oft-useless benefit of hindsight, have been better advised to instead use the margin predictions of any of the various Head-to-Head Margin Predictors. These four Predictors, wagering, as the incumbent Margin Fund did, only when they predicted a Home team victory, would have produced ROIs in the Margin market of around 35% as a consequence of selecting the correct Margin bucket on 22 occasions. If we're to tweak the Margin Fund then, it's tempting to switch it from relying on the margin predictions of Combo_NN_2, which were remarkably aprescient last season, to those of one of the Head-to-Head Margin Predictors.
We've no good reason though to believe that the sterling performances of the Head-to-Head Margin Predictors were anything more than one-off exceptions, prime candidates for regression toward the mean. Reviewing the historical performances of the Margin Predictors in the Margin market only lends credence to this worldview.
I should explain a few things about the Margin Predictors whose performance is summarised in this table before I move on. Firstly, all of the Bookmaker-based Margin Predictors (BA, B3 and B9) and the Head-to-Head-based Margin Predictors (HU3, HU10, HA3 and HA7) in this table now use the Risk-Equalising Implicit Probabilities as inputs rather than the Overround-Equalising Implicit Probabilities. As well, the two ProPred-based (PP3 and PP7) and WinPred-based (W3 and W7) Margin Predictors now use a log-transformed version of the Risk-Equalising Implicit Probabilities.
Combo_7 and Combo_NN_1, each being combinations of the other Margin Predictors, also now use the Risk-Equalising Implicit Probabilities by proxy, leaving Combo_NN_2 as the only Margin Predictor unchanged by MAFL's move from Overround-Equalising to Risk-Equalising Implicit Probabilities as it uses no Bookmaker probabilities in its decision-making. Over on the Statistical Analyses blog I will be posting about the improvements or otherwise in the Margin Predictors' performances as a consequence of the move to Risk-Equalising Implicit Probabilities.
The one column from the table above that I've not discussed so far is the last one, which represents the performance of a hybrid Predictor that has two guesses at the outcome of every game, one guess based on Bookie_9's opinion, and the other based on Combo_NN_2's. Not surprisingly, with an extra guess in every game - even though both guesses are sometimes the same - this hybrid is correct more often than any of the mono-guessing Predictors. It's this hybrid Predictor that I'll be using for the Margin Fund in 2013.
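To show what the hybrid's wagering rule amounts to in practice, here's a short Python sketch: the Fund considers a game only when Combo_NN_2 predicts a Home win or a draw, and then backs the margin buckets implied by both Bookie_9's and Combo_NN_2's predictions, which collapse to a single selection when the two agree. The bucket boundaries in the sketch are placeholders of my own, not the TAB's actual margin-market definitions.

```python
def margin_bucket(predicted_margin):
    """Map a predicted Home-team margin to a margin-betting bucket label.
    The bucket boundaries here are placeholders for illustration only."""
    if predicted_margin < 0:
        return "Away win"
    lower = (int(predicted_margin) // 10) * 10   # e.g. a margin of 23 -> 20-29
    return f"Home by {lower}-{lower + 9}"


def hybrid_margin_bets(bookie_9_margin, combo_nn_2_margin):
    """Buckets the hybrid Margin Fund would back for one game.

    Wagers are placed on both Predictors' buckets, but only when Combo_NN_2
    is predicting a Home-team win or a draw; if the two Predictors land in
    the same bucket, only a single (agreed) selection results.
    """
    if combo_nn_2_margin < 0:      # Combo_NN_2 predicts an Away win: no bet
        return []
    buckets = {margin_bucket(bookie_9_margin), margin_bucket(combo_nn_2_margin)}
    return sorted(buckets)


# Example: Bookie_9 predicts Home by 23, Combo_NN_2 predicts Home by 8
print(hybrid_margin_bets(23, 8))   # two buckets backed
# Example: Combo_NN_2 predicts an Away win by 5, so the Fund sits the game out
print(hybrid_margin_bets(12, -5))  # []
```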
Whether or not this extra guess, and the accompanying increase in the proportion of games in which the correct bucket will wind up being selected, will prove to be profitable is something that's hard for me to conjecture about. My history of payouts for the Margin market extends only to season 2012 and I can say that, had the new Margin Fund operated in that season, it would not have been profitable come the end of the season, despite being mildly in black ink as late as Round 19. Its ultimate ROI of -13.3% compares favourably with that of the actual MAFL Margin Fund of 2012, which was -30.7%, though that's a little like saying that finishing 13th on the competition ladder is better than finishing 15th.
One interesting characteristic of the hybrid model is how often the underlying algorithms agree, which occurs in just 26% of games overall, and just 20% of the games where Combo_NN_2 predicts a Home team win or draw.
To continue the workplace analogy from earlier in this blog, taking on this new Margin Fund essentially amounts to hiring and assigning a more experienced mentor from outside the company (in this case Bookie_9) to a recruit that performed mostly poorly during its internship (here Combo_NN_2) in the hope that, together, the two will make better commercial decisions, sufficient to cover the fact that you're now paying both their salaries.
Reviewing the historical performance of this odd couple we find that their combined decision-making has been best in the first half of the home-and-away season, a little weaker in the second half, and weakest of all in the Finals. The strong showing in the early parts of the season for the pair, which contrasts with that of the Head-to-Head and Line Funds, is, I think, a reflection of the higher levels of calibration of the TAB Bookmaker in the earlier rounds of seasons, which is picked up in the Margin Fund by including the predictions of Bookie_9.
This all feels very speculative to me, so it seems prudent to limit the damage that this new Margin Fund might do. Accordingly it will carry a weighting of just 10% in the Recommended Portfolio.
Summary
The following table describes how each of the three MAFL Funds will operate in season 2013. The darker the red, the larger will be the maximum wager placed by the Fund in question: