Contact Me

I can be contacted via Tony.Corke@gmail.com

 


Entries from March 1, 2010 - March 31, 2010

Monday
Mar 22, 2010

A Proposition Bet Walks Into a Bar ...

There's time for another quick paradox before the season commences.

Consider the following proposition bet. Each week we'll look at the total points scored in the first game of the round and look at whether the total is even or odd. I win if, across consecutive rounds, the sequence (even,odd,odd) occurs before the sequence (even,odd,even) and you win if the converse occurs. So, for example, if the total scores in the first game of Rounds 1, 2 and 3 were (146, 171, 155) then I'd win. If, instead, they were (132, 175, 162) then you'd win. If no result had been achieved after Round 3 then we'd keep going, starting with the aggregate score for game 1 of Round 4, until one or other of the winning sequences occurred.

Now were I to bet you that my sequence would occur before yours, it probably wouldn't surprise you to learn that this is a fair bet at even money odds (recognising that points aggregates for games are as likely to be odd as to be even). But what if, instead, I said that we would play this game repeatedly over the next 3 seasons, with each game ending and another commencing only once both sequences had occurred, and that the overall winner of the bet would be the person whose sequence had, on average across all of the completed games, taken the fewest games to occur?

So, for example, we might over the course of the three seasons complete 6 games, with my sequence taking 8,6,9,7,11 and 9 games to occur and yours taking 10,11,8,6,10 and 15 games to occur. The average time for my sequence is 50/6 = 8.33 and for your sequence is 60/6 = 10 so, in this case, I'd win.

Given that it's an even money bet whose sequence occurs first would you also be willing to accept even money odds that the average number of games it takes for my sequence to occur will be less than the average number of games it takes for your sequence to occur?

Well if you would, you shouldn't. On average my sequence will take 8 games to occur and yours will take 10 games.

The reason for this apparently paradoxical result is subtle and hinges on how much longer, on average, it takes for the losing sequence to occur after the winning sequence has just been completed. If your sequence - (even,odd,even) - has just occurred then I'm already one-third of the way to completing my sequence of (even,odd,odd), but if my sequence has just occurred then the last result was a game with an odd number of points, so you're still at least three games away from completing your sequence. When you do the maths it turns out that this makes the average number of games required to generate your sequence equal to 10 games while it's only 8 games for my sequence. This despite the fact that it's an even money bet whose sequence turns up first.

You might need to run some simulations with a coin to convince yourself of this, but it is true. A discussion of the result is included in this TED talk from Peter Donnelly. (There are some other fantastic talks on the TED site. While you're visiting you might also want to take a look at the talks by Elizabeth Gilbert, Ken Robinson, Malcolm Gladwell, and a stack of others.)
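If you'd rather let a computer do the flipping, here's a quick Monte Carlo sketch (my Python, not from the original post) that treats each game's aggregate as a fair even/odd coin:

```python
import random

def games_until(pattern, rng):
    """Number of games until `pattern` first appears in a stream of
    equally likely (E)ven / (O)dd aggregate scores."""
    history = ""
    while not history.endswith(pattern):
        history += rng.choice("EO")
    return len(history)

rng = random.Random(1)
trials = 100_000
mine = sum(games_until("EOO", rng) for _ in range(trials)) / trials   # my (even,odd,odd)
yours = sum(games_until("EOE", rng) for _ in range(trials)) / trials  # your (even,odd,even)
print(f"EOO: {mine:.2f} games on average; EOE: {yours:.2f}")  # close to 8 and 10
```

The exact expectations of 8 and 10 games can also be derived analytically, for example via Conway's leading-numbers algorithm for Penney's game, but the simulation makes the gap hard to argue with.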

This result is not, I'll acknowledge, a cracking way to win bar bets - unless, I suppose, you're contemplating a long session, but still expecting to remain sufficiently clear-headed to track a hundred or so coin tosses and to, frankly, give a proverbial about the outcome - but it does have a geeky charm to it.

Saturday
Mar 20, 2010

We Don't Need No 37-cent Piece (But 30- and 45-cent Pieces Might be Nice)

Last night I was reading this Freakonomics blog post explaining why a 37-cent piece would make for more efficient US coinage. The question asked in the article was: what set of 4 different coin denominations could most efficiently be used to make up any amount between 1c and 99c? Two equally efficient answers were found: a set comprising a 1-cent, 3-cent, 11-cent and 37-cent piece, and one comprising a 1-cent, 3-cent, 11-cent and 38-cent piece. Either combination can be used to produce any total between 1c and 99c using, on average, just 4.1 coins.

Well Australia's different from the US in oh so many ways, and one of those ways is relevant for the present topic: we round all amounts to the nearest 5 cents, having disposed of the 1- and 2-cent pieces in 1991.

So, I wondered, what set of 4 coins would most efficiently meet our needs?

Just so you're clear what I'm on about, consider the 4-coin set comprising a 5-cent, 10-cent, 70-cent and 90-cent piece. A transaction totalling 5 cents can be met with just 1 coin, a transaction of 10 cents can also be met with just 1 coin, and a transaction of, say, 65 cents can be most efficiently met with 7 coins (1 5-cent piece and 6 10-cent pieces). To determine the overall efficiency of the (5,10,70,90) coin set we calculate how many coins are needed to meet each transaction size from 5 cents to 95 cents (in 5-cent increments) and we average the 19 counts so obtained. (The answer is 3.16 coins per transaction for this particular combination of denominations, which makes it a fairly inefficient combination; not surprising, given that the 70- and 90-cent pieces are useful in so few of the 19 transaction sizes.)
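That calculation can be sketched in a few lines of Python (mine, not from the post). Note that a dynamic program is needed for the minimum coin count, since the greedy largest-coin-first approach fails for oddball denomination sets like these:

```python
def avg_coins(denoms, amounts=range(5, 100, 5)):
    """Average minimum number of coins needed to make each amount,
    computed by dynamic programming over 1..max(amounts) cents."""
    INF = float("inf")
    best = [0] + [INF] * max(amounts)  # best[a] = fewest coins summing to a
    for amt in range(1, max(amounts) + 1):
        best[amt] = min((best[amt - d] + 1 for d in denoms if d <= amt), default=INF)
    return sum(best[a] for a in amounts) / len(amounts)

print(round(avg_coins((5, 10, 70, 90)), 2))  # 3.16, as above
print(round(avg_coins((5, 10, 20, 50)), 2))  # 2.32 for the current Australian set
```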

For Australian conditions, it turns out, we'd need to substitute at least two of our current four sub-dollar denominations (viz the 5c, 10c, 20c and 50c pieces) to create an optimal set. Nine solutions are all equally efficient, each requiring an average of about 2.11 coins to meet every amount from 5 cents to 95 cents incrementing in 5 cent lots. The optimal coin sets are:
  • A 5,10,30 and 45 cent solution
  • A 5,15,20 and 45 cent solution
  • A 5,15,35 and 40 cent solution
  • A 5,15,35 and 45 cent solution
  • A 5,15,35 and 60 cent solution
  • A 5,15,40 and 45 cent solution
  • A 5,20,30 and 45 cent solution
  • A 5,20,30 and 65 cent solution
  • A 5,20,35 and 45 cent solution
So, Aussies don't need to consider a 37-cent coin; we need to ponder 15-, 35-, 40- and 45-cent coins.
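A brute-force search over every 4-coin set of sub-dollar denominations (again my sketch, assuming denominations come in multiples of 5 cents) reproduces the optimum:

```python
from itertools import combinations

def avg_coins(denoms):
    # Minimum coins for each 5c amount from 5c to 95c, via dynamic programming.
    INF = float("inf")
    best = [0] + [INF] * 95
    for amt in range(1, 96):
        best[amt] = min((best[amt - d] + 1 for d in denoms if d <= amt), default=INF)
    return sum(best[a] for a in range(5, 100, 5)) / 19

scores = {c: avg_coins(c) for c in combinations(range(5, 100, 5), 4)}
best_avg = min(scores.values())
winners = sorted(c for c, s in scores.items() if s == best_avg)
print(f"{best_avg:.2f}")  # about 2.11 coins per transaction
for w in winners:
    print(w)              # the optimal sets, (5, 10, 30, 45) among them
```

Sets lacking a 5-cent piece can't make a 5-cent transaction at all, so they score infinity and drop out of the running automatically.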

Maybe that's a little too much change (if you'll forgive the dreadful pun). As I noted earlier, each of the optimal solutions listed above necessitates our changing at least two of our existing coin denominations. If you'd prefer that we change only one, the best solution is the (5,10,20,45) set, which is only marginally less efficient than the optimum, requiring an average of 2.21 coins for each transaction in the 5 cent to 95 cent range.

Another sub-optimal but, I contend, attractive option is the (5,20,30,75) set, which needs an average of only 2.16 coins per transaction and which includes a 75-cent piece that would surely come in handy for purchases over a dollar.

Finally, you might be curious how inefficient our current (5,10,20,50) coin set is. It's not too bad, requiring 2.32 coins per transaction, which makes it about 10% less efficient than the optimum.

So the next time you're weighed down with a purse, wallet or pocket full of coins, just think how much more efficient it would be if some of those coins were 30 and 45 cent pieces (and think how much more fun it would be waiting for someone at the checkout to pause, look skyward, give up and then scan a reference sheet to find out how to provide you with the correct change).

Thursday
Mar 11, 2010

A Paradox, Perhaps to Ponder

Today a petite blog on a quirk of the percentages method that's used in AFL to separate teams level on competition points.

Imagine that the first two rounds of the season produced the following results:

  • Round 1: Geelong 150 to 115 (a percentage of 130.4); St Kilda 75 to 65 (a percentage of 115.4)
  • Round 2: Geelong 54 to 32 (a percentage of 168.8); St Kilda 160 to 100 (a percentage of 160.0)

Geelong and St Kilda have each won in both rounds and Geelong's percentage is superior to St Kilda's on both occasions. So, who will be placed higher on the ladder at the end of the 2nd round?

Commonsense tells us it must be Geelong, but let's do the maths anyway.

  • Geelong's percentage is (150+54)/(115+32) = 138.8
  • St Kilda's percentage is (75+160)/(65+100) = 142.4

How about that - St Kilda will be placed above Geelong on the competition ladder by virtue of a superior overall percentage despite having a poorer percentage in both of the games that make up the total.

This curious result is an example of what's known as Simpson's paradox, a phenomenon that can arise when weighted averages are formed from two or more sets of data and the weights used in combining the data differ markedly from one set to another.

In the example I've just provided, St Kilda's overall percentage ends up higher because its weaker 115% in Round 1 is weighted by only about 0.4 and its much stronger 160% in Round 2 is weighted by about 0.6, these weights being the proportions of the total points that St Kilda conceded (165) that were, respectively, conceded in Round 1 (65) and Round 2 (100). Geelong, in contrast, in Round 1 conceded 78% of the total points it conceded across the two games, and conceded only 22% of the total in Round 2. Consequently its poorer Round 1 percentage of 130% carries over three-and-a-half times the weight of its superior Round 2 percentage of 169%. This results in an overall percentage for Geelong of about 0.78 x 130% + 0.22 x 169% or 138.8, which is just under St Kilda's 142.4.
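To see all of that arithmetic in one place, here's a tiny Python check (mine, using the scores given above):

```python
rounds = {  # (points for, points against) in Rounds 1 and 2
    "Geelong":  [(150, 115), (54, 32)],
    "St Kilda": [(75, 65), (160, 100)],
}

def percentage(games):
    # AFL percentage: 100 * total points for / total points against
    return 100 * sum(f for f, _ in games) / sum(a for _, a in games)

# Geelong's percentage beats St Kilda's in each individual round...
for (gf, ga), (sf, sa) in zip(rounds["Geelong"], rounds["St Kilda"]):
    assert 100 * gf / ga > 100 * sf / sa

# ...yet St Kilda's aggregate percentage is the higher one.
print(round(percentage(rounds["Geelong"]), 1))   # 138.8
print(round(percentage(rounds["St Kilda"]), 1))  # 142.4
```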

When Simpson's paradox leads to counterintuitive ladder positions it's hard to get too fussed about it, but real-world examples such as those on the Wikipedia page linked to above demonstrate that Simmo can lurk within analyses of far greater import.

(It'd be remiss of me to close without noting - especially for the benefit of followers of the other Aussie ball sports - that Simpson's paradox is unable to affect the competition ladders for sports that use a For-and-Against differential rather than a ratio, because differentials are additive across games. Clearly, maths is not a strong point for the AFL. Why else would you insist on crediting 4 points for a win and 2 points for a draw, oblivious, it seems, to the common divisor shared by the numbers 2 and 4?)

Saturday
Mar 6, 2010

Improving Your Tipping

You're out walking on a cold winter's evening, contemplating the weekend's upcoming matches, when you're approached by a behatted, shadowy figure who offers to sell you a couple of statistical models that tip AFL winners. You squint into the gloom and can just discern the outline of a pocket-protector on the man who is now blocking your path, and feel immediately that this is a person whose word you can trust.

He tells you that the models he is offering each use different pieces of data about a particular game and that neither of them uses data about which is the home team. He adds - uninformatively, you think - that the two models produce statistically independent predictions of the winning team. You ask how accurate the models are that he's selling and he frowns momentarily, then sighs before revealing that one of the models tips at 60% and the other at 64%. They're not that good, he acknowledges, sensing your disappointment, but he needs money to feed his Lotto habit. "Lotto wheels?", you ask. He nods, eyes downcast. Clearly he hasn't learned much about probability, you realise.

As a regular reader of this blog you already have a model for tipping winners, sophisticated though it is, which involves looking up which team is the home team - real or notional - and then tipping that team. This approach, you know, allows you to tip at about a 65% success rate.

What use to you then is a model - actually two, since he's offering them as a job lot - that can't out-predict your existing model? You tip at 65% and the best model he's offering tips only at 64%.

If you believe him, should you walk away? Or, phrased in more statistical terms, are you better off with a single model that tips at 65% or with three models that make independent predictions and that tip at 65%, 64% and 60% respectively?

By now your olfactory system is probably detecting a rodent and you've guessed that you're better off with the three models, unintuitive though that might seem.

Indeed, were you to use the three models and make your tip on the basis of a simple plurality of their opinions you could expect to lift your predictive accuracy to 68.9%, an increase of almost 4 percentage points. I think that's remarkable.
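The 68.9% figure drops out of enumerating the outcomes in which at least two of the three independent tipsters are correct. A sketch (my Python, not MAFL code):

```python
from itertools import product

def majority_accuracy(accuracies):
    """P(the majority of independent tipsters is correct), given each
    tipster's individual probability of being correct."""
    total = 0.0
    for outcome in product((True, False), repeat=len(accuracies)):
        if sum(outcome) * 2 > len(accuracies):  # a majority tipped correctly
            prob = 1.0
            for correct, acc in zip(outcome, accuracies):
                prob *= acc if correct else 1 - acc
            total += prob
    return total

print(round(majority_accuracy([0.65, 0.64, 0.60]), 3))  # 0.691
print(round(majority_accuracy([0.65, 0.58, 0.58]), 3))  # 0.653: the 58% case below
```

Plugging in other pairs of offered accuracies confirms the break-even point: once the two strangers' accuracies sum to less than 116%, the majority vote slips back under your stand-alone 65%.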

The pivotal requirement for the improvement is that the three predictions be statistically independent; if that's the case then, given the levels of predictive accuracy I've provided, the combined opinion of the three of them is better than the individual opinion of any one of them.

In fact, you should also have accepted the offer from your Lotto-addicted confrere had the models he'd been offering each only been able to tip at 58%, though in that case their combination with your own model would have yielded an overall lift in predictive accuracy of only 0.3%. Very roughly speaking, for every 1% increase in the sum of the predictive accuracies of the two models you're being offered, you can expect about a 0.45% increase in the predictive accuracy of the model you can form by combining them with your own home-team based model.

That's not to say that you should accept any two models you're offered that generate independent predictions. If the sum of the predictive accuracies of the two models you're offered is less than 116%, you're better off sticking to your home-team model.

The statistical result that I've described here has obvious implications for building Fund algorithms and, to some extent, has already been exploited by some of the existing Funds. The floating-window based models of HELP, LAMP and HAMP are also loosely inspired by this result, though the predictions of different floating-window models are unlikely to be statistically independent. A floating-window model that is based on the most recent 10 rounds of results, for example, shares much of the data that it uses with the floating-window model that is based on the most recent 15 rounds of results. This statistical dependence significantly reduces the predictive lift that can be achieved by combining such models.

Nonetheless, it's an interesting result, I think, and one that more generally highlights the statistical validity of the popular notion that "many heads are better than one" - though, as we now know, this is only true if the owners of those heads are truly independent thinkers and if they're each individually reasonably astute.